00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 1067
00:00:00.000 originally caused by:
00:00:00.000 Started by upstream project "nightly-trigger" build number 3729
00:00:00.000 originally caused by:
00:00:00.000 Started by timer
00:00:00.070 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.071 The recommended git tool is: git
00:00:00.071 using credential 00000000-0000-0000-0000-000000000002
00:00:00.073 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.109 Fetching changes from the remote Git repository
00:00:00.111 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.161 Using shallow fetch with depth 1
00:00:00.161 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.161 > git --version # timeout=10
00:00:00.197 > git --version # 'git version 2.39.2'
00:00:00.197 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.223 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.223 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.535 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.547 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.559 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:05.559 > git config core.sparsecheckout # timeout=10
00:00:05.569 > git read-tree -mu HEAD # timeout=10
00:00:05.587 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:05.608 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:05.608 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:05.698 [Pipeline] Start of Pipeline
00:00:05.709 [Pipeline] library
00:00:05.710 Loading library shm_lib@master
00:00:05.711 Library shm_lib@master is cached. Copying from home.
00:00:05.725 [Pipeline] node
00:00:05.738 Running on WFP4 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.740 [Pipeline] {
00:00:05.750 [Pipeline] catchError
00:00:05.751 [Pipeline] {
00:00:05.763 [Pipeline] wrap
00:00:05.773 [Pipeline] {
00:00:05.781 [Pipeline] stage
00:00:05.783 [Pipeline] { (Prologue)
00:00:06.036 [Pipeline] sh
00:00:06.319 + logger -p user.info -t JENKINS-CI
00:00:06.337 [Pipeline] echo
00:00:06.338 Node: WFP4
00:00:06.344 [Pipeline] sh
00:00:06.639 [Pipeline] setCustomBuildProperty
00:00:06.652 [Pipeline] echo
00:00:06.654 Cleanup processes
00:00:06.659 [Pipeline] sh
00:00:06.942 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.943 674642 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.953 [Pipeline] sh
00:00:07.234 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.234 ++ grep -v 'sudo pgrep'
00:00:07.234 ++ awk '{print $1}'
00:00:07.234 + sudo kill -9
00:00:07.234 + true
00:00:07.248 [Pipeline] cleanWs
00:00:07.256 [WS-CLEANUP] Deleting project workspace...
00:00:07.256 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.262 [WS-CLEANUP] done
00:00:07.267 [Pipeline] setCustomBuildProperty
00:00:07.277 [Pipeline] sh
00:00:07.557 + sudo git config --global --replace-all safe.directory '*'
00:00:07.655 [Pipeline] httpRequest
00:00:08.261 [Pipeline] echo
00:00:08.263 Sorcerer 10.211.164.20 is alive
00:00:08.274 [Pipeline] retry
00:00:08.276 [Pipeline] {
00:00:08.290 [Pipeline] httpRequest
00:00:08.294 HttpMethod: GET
00:00:08.295 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.295 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.312 Response Code: HTTP/1.1 200 OK
00:00:08.312 Success: Status code 200 is in the accepted range: 200,404
00:00:08.312 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:27.203 [Pipeline] }
00:00:27.220 [Pipeline] // retry
00:00:27.228 [Pipeline] sh
00:00:27.513 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:27.529 [Pipeline] httpRequest
00:00:27.924 [Pipeline] echo
00:00:27.926 Sorcerer 10.211.164.20 is alive
00:00:27.935 [Pipeline] retry
00:00:27.937 [Pipeline] {
00:00:27.952 [Pipeline] httpRequest
00:00:27.956 HttpMethod: GET
00:00:27.956 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:27.957 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:27.981 Response Code: HTTP/1.1 200 OK
00:00:27.981 Success: Status code 200 is in the accepted range: 200,404
00:00:27.982 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:01:36.740 [Pipeline] }
00:01:36.757 [Pipeline] // retry
00:01:36.764 [Pipeline] sh
00:01:37.053 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:01:39.601 [Pipeline] sh
00:01:39.886 + git -C spdk log --oneline -n5
00:01:39.886 e01cb43b8 mk/spdk.common.mk sed the minor version
00:01:39.886 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state
00:01:39.886 2104eacf0 test/check_so_deps: use VERSION to look for prior tags
00:01:39.886 66289a6db build: use VERSION file for storing version
00:01:39.886 626389917 nvme/rdma: Don't limit max_sge if UMR is used
00:01:39.904 [Pipeline] withCredentials
00:01:39.914 > git --version # timeout=10
00:01:39.926 > git --version # 'git version 2.39.2'
00:01:39.943 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:39.945 [Pipeline] {
00:01:39.953 [Pipeline] retry
00:01:39.955 [Pipeline] {
00:01:39.970 [Pipeline] sh
00:01:40.254 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:01:40.525 [Pipeline] }
00:01:40.543 [Pipeline] // retry
00:01:40.548 [Pipeline] }
00:01:40.564 [Pipeline] // withCredentials
00:01:40.574 [Pipeline] httpRequest
00:01:40.941 [Pipeline] echo
00:01:40.943 Sorcerer 10.211.164.20 is alive
00:01:40.952 [Pipeline] retry
00:01:40.954 [Pipeline] {
00:01:40.969 [Pipeline] httpRequest
00:01:40.973 HttpMethod: GET
00:01:40.974 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:40.974 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:40.990 Response Code: HTTP/1.1 200 OK
00:01:40.990 Success: Status code 200 is in the accepted range: 200,404
00:01:40.991 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:58.859 [Pipeline] }
00:01:58.877 [Pipeline] // retry
00:01:58.885 [Pipeline] sh
00:01:59.174 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:02:00.564 [Pipeline] sh
00:02:00.849 + git -C dpdk log --oneline -n5
00:02:00.849 eeb0605f11 version: 23.11.0
00:02:00.849 238778122a doc: update release notes for 23.11
00:02:00.849 46aa6b3cfc doc: fix description of RSS features
00:02:00.849 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:02:00.849 7e421ae345 devtools: support skipping forbid rule check
00:02:00.859 [Pipeline] }
00:02:00.873 [Pipeline] // stage
00:02:00.882 [Pipeline] stage
00:02:00.884 [Pipeline] { (Prepare)
00:02:00.904 [Pipeline] writeFile
00:02:00.919 [Pipeline] sh
00:02:01.204 + logger -p user.info -t JENKINS-CI
00:02:01.217 [Pipeline] sh
00:02:01.502 + logger -p user.info -t JENKINS-CI
00:02:01.514 [Pipeline] sh
00:02:01.800 + cat autorun-spdk.conf
00:02:01.800 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:01.800 SPDK_TEST_NVMF=1
00:02:01.800 SPDK_TEST_NVME_CLI=1
00:02:01.800 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:01.800 SPDK_TEST_NVMF_NICS=e810
00:02:01.800 SPDK_TEST_VFIOUSER=1
00:02:01.800 SPDK_RUN_UBSAN=1
00:02:01.800 NET_TYPE=phy
00:02:01.800 SPDK_TEST_NATIVE_DPDK=v23.11
00:02:01.800 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:01.807 RUN_NIGHTLY=1
00:02:01.811 [Pipeline] readFile
00:02:01.836 [Pipeline] withEnv
00:02:01.838 [Pipeline] {
00:02:01.851 [Pipeline] sh
00:02:02.141 + set -ex
00:02:02.141 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:02.141 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:02.141 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:02.141 ++ SPDK_TEST_NVMF=1
00:02:02.141 ++ SPDK_TEST_NVME_CLI=1
00:02:02.141 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:02.141 ++ SPDK_TEST_NVMF_NICS=e810
00:02:02.141 ++ SPDK_TEST_VFIOUSER=1
00:02:02.141 ++ SPDK_RUN_UBSAN=1
00:02:02.141 ++ NET_TYPE=phy
00:02:02.141 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:02:02.141 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:02.141 ++ RUN_NIGHTLY=1
00:02:02.141 + case $SPDK_TEST_NVMF_NICS in
00:02:02.141 + DRIVERS=ice
00:02:02.141 + [[ tcp == \r\d\m\a ]]
00:02:02.141 + [[ -n ice ]]
00:02:02.141 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:02.141 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:02.141 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:02.141 rmmod: ERROR: Module i40iw is not currently loaded
00:02:02.141 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:02.141 + true
00:02:02.141 + for D in $DRIVERS
00:02:02.141 + sudo modprobe ice
00:02:02.141 + exit 0
00:02:02.150 [Pipeline] }
00:02:02.165 [Pipeline] // withEnv
00:02:02.170 [Pipeline] }
00:02:02.184 [Pipeline] // stage
00:02:02.194 [Pipeline] catchError
00:02:02.196 [Pipeline] {
00:02:02.209 [Pipeline] timeout
00:02:02.209 Timeout set to expire in 1 hr 0 min
00:02:02.211 [Pipeline] {
00:02:02.224 [Pipeline] stage
00:02:02.226 [Pipeline] { (Tests)
00:02:02.239 [Pipeline] sh
00:02:02.525 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:02.525 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:02.525 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:02.525 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:02.525 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:02.525 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:02.525 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:02.525 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:02.525 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:02.525 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:02.525 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:02.525 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:02.525 + source /etc/os-release
00:02:02.525 ++ NAME='Fedora Linux'
00:02:02.525 ++ VERSION='39 (Cloud Edition)'
00:02:02.525 ++ ID=fedora
00:02:02.525 ++ VERSION_ID=39
00:02:02.525 ++ VERSION_CODENAME=
00:02:02.525 ++ PLATFORM_ID=platform:f39
00:02:02.525 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:02.525 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:02.525 ++ LOGO=fedora-logo-icon
00:02:02.525 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:02.525 ++ HOME_URL=https://fedoraproject.org/
00:02:02.525 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:02.525 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:02.525 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:02.525 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:02.525 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:02.525 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:02.525 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:02.525 ++ SUPPORT_END=2024-11-12
00:02:02.525 ++ VARIANT='Cloud Edition'
00:02:02.525 ++ VARIANT_ID=cloud
00:02:02.525 + uname -a
00:02:02.525 Linux spdk-wfp-04 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux
00:02:02.525 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:05.062 Hugepages
00:02:05.062 node hugesize free / total
00:02:05.062 node0 1048576kB 0 / 0
00:02:05.062 node0 2048kB 0 / 0
00:02:05.062 node1 1048576kB 0 / 0
00:02:05.062 node1 2048kB 0 / 0
00:02:05.062
00:02:05.062 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:05.062 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:02:05.062 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:02:05.062 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:02:05.062 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:02:05.062 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:02:05.062 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:02:05.062 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:02:05.062 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:02:05.062 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:02:05.062 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:02:05.062 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:02:05.062 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:02:05.062 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:02:05.062 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:02:05.062 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:02:05.062 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:02:05.062 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:02:05.062 + rm -f /tmp/spdk-ld-path 00:02:05.062 + source autorun-spdk.conf 00:02:05.062 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.062 ++ SPDK_TEST_NVMF=1 00:02:05.062 ++ SPDK_TEST_NVME_CLI=1 00:02:05.062 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:05.062 ++ SPDK_TEST_NVMF_NICS=e810 00:02:05.062 ++ SPDK_TEST_VFIOUSER=1 00:02:05.062 ++ SPDK_RUN_UBSAN=1 00:02:05.062 ++ NET_TYPE=phy 00:02:05.062 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:05.062 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:05.062 ++ RUN_NIGHTLY=1 00:02:05.062 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:05.062 + [[ -n '' ]] 00:02:05.062 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:05.062 + for M in /var/spdk/build-*-manifest.txt 00:02:05.062 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:05.062 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:05.062 + for M in /var/spdk/build-*-manifest.txt 00:02:05.062 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:05.062 + cp /var/spdk/build-pkg-manifest.txt 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:05.062 + for M in /var/spdk/build-*-manifest.txt 00:02:05.062 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:05.062 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:05.062 ++ uname 00:02:05.062 + [[ Linux == \L\i\n\u\x ]] 00:02:05.062 + sudo dmesg -T 00:02:05.062 + sudo dmesg --clear 00:02:05.322 + dmesg_pid=675614 00:02:05.322 + [[ Fedora Linux == FreeBSD ]] 00:02:05.322 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:05.322 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:05.322 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:05.322 + [[ -x /usr/src/fio-static/fio ]] 00:02:05.322 + export FIO_BIN=/usr/src/fio-static/fio 00:02:05.322 + FIO_BIN=/usr/src/fio-static/fio 00:02:05.322 + sudo dmesg -Tw 00:02:05.322 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:05.322 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:05.322 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:05.322 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:05.322 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:05.322 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:05.322 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:05.322 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:05.322 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:05.322 02:23:35 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:05.322 02:23:35 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:05.322 02:23:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.322 02:23:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:05.322 02:23:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 
-- $ SPDK_TEST_NVME_CLI=1 00:02:05.322 02:23:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:05.322 02:23:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:02:05.322 02:23:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:02:05.322 02:23:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:02:05.322 02:23:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:02:05.322 02:23:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:05.322 02:23:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:05.322 02:23:35 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:02:05.322 02:23:35 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:05.322 02:23:35 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:05.322 02:23:35 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:05.322 02:23:35 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:05.322 02:23:35 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:05.322 02:23:35 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:05.322 02:23:35 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:05.322 02:23:35 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:05.322 02:23:35 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.322 02:23:35 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.322 02:23:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.322 02:23:35 -- paths/export.sh@5 -- $ export PATH 00:02:05.322 02:23:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.322 02:23:35 -- 
common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:05.322 02:23:35 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:05.322 02:23:35 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734312215.XXXXXX 00:02:05.322 02:23:35 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734312215.ecU9K1 00:02:05.322 02:23:35 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:05.322 02:23:35 -- common/autobuild_common.sh@499 -- $ '[' -n v23.11 ']' 00:02:05.322 02:23:35 -- common/autobuild_common.sh@500 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:05.322 02:23:35 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:02:05.322 02:23:35 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:05.322 02:23:35 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:05.322 02:23:35 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:05.322 02:23:35 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:05.322 02:23:35 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.322 02:23:35 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:02:05.322 02:23:35 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:05.322 02:23:35 -- pm/common@17 -- $ local monitor 00:02:05.322 02:23:35 -- pm/common@19 
-- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.322 02:23:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.322 02:23:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.322 02:23:35 -- pm/common@21 -- $ date +%s 00:02:05.322 02:23:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.322 02:23:35 -- pm/common@21 -- $ date +%s 00:02:05.322 02:23:35 -- pm/common@25 -- $ sleep 1 00:02:05.322 02:23:35 -- pm/common@21 -- $ date +%s 00:02:05.322 02:23:35 -- pm/common@21 -- $ date +%s 00:02:05.322 02:23:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734312215 00:02:05.322 02:23:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734312215 00:02:05.322 02:23:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734312215 00:02:05.322 02:23:35 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734312215 00:02:05.322 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734312215_collect-cpu-load.pm.log 00:02:05.322 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734312215_collect-vmstat.pm.log 00:02:05.322 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734312215_collect-cpu-temp.pm.log 00:02:05.582 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734312215_collect-bmc-pm.bmc.pm.log 00:02:06.521 02:23:36 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:06.521 02:23:36 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:06.521 02:23:36 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:06.521 02:23:36 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:06.521 02:23:36 -- spdk/autobuild.sh@16 -- $ date -u 00:02:06.521 Mon Dec 16 01:23:36 AM UTC 2024 00:02:06.521 02:23:36 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:06.521 v25.01-rc1-2-ge01cb43b8 00:02:06.521 02:23:36 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:06.521 02:23:36 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:06.521 02:23:36 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:06.521 02:23:36 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:06.521 02:23:36 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:06.521 02:23:36 -- common/autotest_common.sh@10 -- $ set +x 00:02:06.521 ************************************ 00:02:06.521 START TEST ubsan 00:02:06.521 ************************************ 00:02:06.521 02:23:36 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:06.521 using ubsan 00:02:06.521 00:02:06.521 real 0m0.000s 00:02:06.521 user 0m0.000s 00:02:06.521 sys 0m0.000s 00:02:06.521 02:23:36 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:06.521 02:23:36 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:06.521 ************************************ 00:02:06.521 END TEST ubsan 00:02:06.521 ************************************ 00:02:06.521 02:23:37 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:06.521 02:23:37 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:06.521 02:23:37 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:06.521 02:23:37 -- 
common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:02:06.521 02:23:37 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:06.521 02:23:37 -- common/autotest_common.sh@10 -- $ set +x 00:02:06.521 ************************************ 00:02:06.521 START TEST build_native_dpdk 00:02:06.521 ************************************ 00:02:06.521 02:23:37 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:06.521 02:23:37 build_native_dpdk -- 
common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:02:06.521 eeb0605f11 version: 23.11.0 00:02:06.521 238778122a doc: update release notes for 23.11 00:02:06.521 46aa6b3cfc doc: fix description of RSS features 00:02:06.521 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:06.521 7e421ae345 devtools: support skipping forbid rule check 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" 
"mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm") 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 21.11.0 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:06.521 02:23:37 
build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:02:06.521 patching file config/rte_config.h 00:02:06.521 Hunk #1 succeeded at 60 (offset 1 line). 
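The xtrace lines above walk through the `lt`/`cmp_versions` helpers from scripts/common.sh, splitting each dotted version on `IFS=.-:` and comparing field by field. A minimal sketch of that comparison follows; the function name `lt` mirrors the trace, but the body is a simplified re-implementation (no `decimal` validation step), not the exact scripts/common.sh code:

```shell
#!/usr/bin/env bash
# Compare two dotted versions field by field, as the trace above does.
# Simplified sketch of the lt/cmp_versions helpers; illustrative only.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1   # first differing field decides
        (( a < b )) && return 0
    done
    return 1   # equal versions: not strictly less-than
}

lt 23.11.0 21.11.0 && echo "lt" || echo "not lt"   # prints "not lt"
lt 23.11.0 24.07.0 && echo "lt" || echo "not lt"   # prints "lt"
```

This matches the two decisions in the log: 23.11.0 is not below 21.11.0 (so the old-DPDK patch path is skipped via `return 1`), while 23.11.0 is below 24.07.0 (so the rte_pcapng.c patch is applied).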
00:02:06.521 02:23:37 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 23.11.0 24.07.0 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:06.521 02:23:37 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:06.522 02:23:37 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1 00:02:06.522 patching file lib/pcapng/rte_pcapng.c 00:02:06.522 02:23:37 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 23.11.0 24.07.0 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:06.522 02:23:37 build_native_dpdk -- 
scripts/common.sh@338 -- $ local 'op=>=' 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:06.522 02:23:37 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:06.522 02:23:37 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:02:06.522 02:23:37 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:06.522 02:23:37 build_native_dpdk -- common/autobuild_common.sh@191 -- 
$ '[' Linux = FreeBSD ']' 00:02:06.522 02:23:37 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:02:06.522 02:23:37 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:11.878 The Meson build system 00:02:11.878 Version: 1.5.0 00:02:11.878 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:11.878 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:02:11.878 Build type: native build 00:02:11.878 Program cat found: YES (/usr/bin/cat) 00:02:11.878 Project name: DPDK 00:02:11.878 Project version: 23.11.0 00:02:11.878 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:11.878 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:11.878 Host machine cpu family: x86_64 00:02:11.878 Host machine cpu: x86_64 00:02:11.878 Message: ## Building in Developer Mode ## 00:02:11.878 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:11.878 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:02:11.878 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:02:11.878 Program python3 found: YES (/usr/bin/python3) 00:02:11.878 Program cat found: YES (/usr/bin/cat) 00:02:11.878 config/meson.build:113: WARNING: The "machine" option is deprecated. 
Please use "cpu_instruction_set" instead. 00:02:11.878 Compiler for C supports arguments -march=native: YES 00:02:11.878 Checking for size of "void *" : 8 00:02:11.878 Checking for size of "void *" : 8 (cached) 00:02:11.878 Library m found: YES 00:02:11.878 Library numa found: YES 00:02:11.878 Has header "numaif.h" : YES 00:02:11.878 Library fdt found: NO 00:02:11.878 Library execinfo found: NO 00:02:11.878 Has header "execinfo.h" : YES 00:02:11.878 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:11.878 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:11.878 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:11.878 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:11.878 Run-time dependency openssl found: YES 3.1.1 00:02:11.878 Run-time dependency libpcap found: YES 1.10.4 00:02:11.878 Has header "pcap.h" with dependency libpcap: YES 00:02:11.878 Compiler for C supports arguments -Wcast-qual: YES 00:02:11.878 Compiler for C supports arguments -Wdeprecated: YES 00:02:11.878 Compiler for C supports arguments -Wformat: YES 00:02:11.878 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:11.878 Compiler for C supports arguments -Wformat-security: NO 00:02:11.878 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:11.878 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:11.878 Compiler for C supports arguments -Wnested-externs: YES 00:02:11.878 Compiler for C supports arguments -Wold-style-definition: YES 00:02:11.878 Compiler for C supports arguments -Wpointer-arith: YES 00:02:11.878 Compiler for C supports arguments -Wsign-compare: YES 00:02:11.878 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:11.878 Compiler for C supports arguments -Wundef: YES 00:02:11.878 Compiler for C supports arguments -Wwrite-strings: YES 00:02:11.878 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:11.878 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:02:11.878 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:11.878 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:11.878 Program objdump found: YES (/usr/bin/objdump) 00:02:11.878 Compiler for C supports arguments -mavx512f: YES 00:02:11.878 Checking if "AVX512 checking" compiles: YES 00:02:11.878 Fetching value of define "__SSE4_2__" : 1 00:02:11.878 Fetching value of define "__AES__" : 1 00:02:11.878 Fetching value of define "__AVX__" : 1 00:02:11.878 Fetching value of define "__AVX2__" : 1 00:02:11.878 Fetching value of define "__AVX512BW__" : 1 00:02:11.878 Fetching value of define "__AVX512CD__" : 1 00:02:11.878 Fetching value of define "__AVX512DQ__" : 1 00:02:11.878 Fetching value of define "__AVX512F__" : 1 00:02:11.878 Fetching value of define "__AVX512VL__" : 1 00:02:11.878 Fetching value of define "__PCLMUL__" : 1 00:02:11.878 Fetching value of define "__RDRND__" : 1 00:02:11.878 Fetching value of define "__RDSEED__" : 1 00:02:11.878 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:11.878 Fetching value of define "__znver1__" : (undefined) 00:02:11.878 Fetching value of define "__znver2__" : (undefined) 00:02:11.878 Fetching value of define "__znver3__" : (undefined) 00:02:11.878 Fetching value of define "__znver4__" : (undefined) 00:02:11.878 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:11.878 Message: lib/log: Defining dependency "log" 00:02:11.878 Message: lib/kvargs: Defining dependency "kvargs" 00:02:11.878 Message: lib/telemetry: Defining dependency "telemetry" 00:02:11.878 Checking for function "getentropy" : NO 00:02:11.878 Message: lib/eal: Defining dependency "eal" 00:02:11.878 Message: lib/ring: Defining dependency "ring" 00:02:11.878 Message: lib/rcu: Defining dependency "rcu" 00:02:11.878 Message: lib/mempool: Defining dependency "mempool" 00:02:11.878 Message: lib/mbuf: Defining dependency "mbuf" 00:02:11.878 Fetching value 
of define "__PCLMUL__" : 1 (cached) 00:02:11.878 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:11.878 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:11.878 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:11.878 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:11.878 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:11.878 Compiler for C supports arguments -mpclmul: YES 00:02:11.878 Compiler for C supports arguments -maes: YES 00:02:11.878 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:11.878 Compiler for C supports arguments -mavx512bw: YES 00:02:11.878 Compiler for C supports arguments -mavx512dq: YES 00:02:11.878 Compiler for C supports arguments -mavx512vl: YES 00:02:11.878 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:11.878 Compiler for C supports arguments -mavx2: YES 00:02:11.878 Compiler for C supports arguments -mavx: YES 00:02:11.878 Message: lib/net: Defining dependency "net" 00:02:11.878 Message: lib/meter: Defining dependency "meter" 00:02:11.878 Message: lib/ethdev: Defining dependency "ethdev" 00:02:11.878 Message: lib/pci: Defining dependency "pci" 00:02:11.878 Message: lib/cmdline: Defining dependency "cmdline" 00:02:11.878 Message: lib/metrics: Defining dependency "metrics" 00:02:11.878 Message: lib/hash: Defining dependency "hash" 00:02:11.878 Message: lib/timer: Defining dependency "timer" 00:02:11.878 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:11.878 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:11.878 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:11.878 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:11.878 Message: lib/acl: Defining dependency "acl" 00:02:11.878 Message: lib/bbdev: Defining dependency "bbdev" 00:02:11.878 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:11.878 Run-time dependency libelf found: YES 0.191 00:02:11.878 Message: lib/bpf: Defining dependency "bpf" 
00:02:11.878 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:11.878 Message: lib/compressdev: Defining dependency "compressdev" 00:02:11.878 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:11.878 Message: lib/distributor: Defining dependency "distributor" 00:02:11.878 Message: lib/dmadev: Defining dependency "dmadev" 00:02:11.878 Message: lib/efd: Defining dependency "efd" 00:02:11.878 Message: lib/eventdev: Defining dependency "eventdev" 00:02:11.878 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:11.878 Message: lib/gpudev: Defining dependency "gpudev" 00:02:11.878 Message: lib/gro: Defining dependency "gro" 00:02:11.878 Message: lib/gso: Defining dependency "gso" 00:02:11.878 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:11.878 Message: lib/jobstats: Defining dependency "jobstats" 00:02:11.878 Message: lib/latencystats: Defining dependency "latencystats" 00:02:11.878 Message: lib/lpm: Defining dependency "lpm" 00:02:11.878 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:11.878 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:11.878 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:11.878 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:11.878 Message: lib/member: Defining dependency "member" 00:02:11.878 Message: lib/pcapng: Defining dependency "pcapng" 00:02:11.878 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:11.878 Message: lib/power: Defining dependency "power" 00:02:11.878 Message: lib/rawdev: Defining dependency "rawdev" 00:02:11.878 Message: lib/regexdev: Defining dependency "regexdev" 00:02:11.879 Message: lib/mldev: Defining dependency "mldev" 00:02:11.879 Message: lib/rib: Defining dependency "rib" 00:02:11.879 Message: lib/reorder: Defining dependency "reorder" 00:02:11.879 Message: lib/sched: Defining dependency "sched" 00:02:11.879 Message: lib/security: Defining dependency "security" 00:02:11.879 Message: lib/stack: 
Defining dependency "stack" 00:02:11.879 Has header "linux/userfaultfd.h" : YES 00:02:11.879 Has header "linux/vduse.h" : YES 00:02:11.879 Message: lib/vhost: Defining dependency "vhost" 00:02:11.879 Message: lib/ipsec: Defining dependency "ipsec" 00:02:11.879 Message: lib/pdcp: Defining dependency "pdcp" 00:02:11.879 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:11.879 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:11.879 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:11.879 Message: lib/fib: Defining dependency "fib" 00:02:11.879 Message: lib/port: Defining dependency "port" 00:02:11.879 Message: lib/pdump: Defining dependency "pdump" 00:02:11.879 Message: lib/table: Defining dependency "table" 00:02:11.879 Message: lib/pipeline: Defining dependency "pipeline" 00:02:11.879 Message: lib/graph: Defining dependency "graph" 00:02:11.879 Message: lib/node: Defining dependency "node" 00:02:11.879 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:12.448 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:12.448 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:12.448 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:12.448 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:12.448 Compiler for C supports arguments -Wno-unused-value: YES 00:02:12.448 Compiler for C supports arguments -Wno-format: YES 00:02:12.448 Compiler for C supports arguments -Wno-format-security: YES 00:02:12.448 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:12.448 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:12.448 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:12.448 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:12.448 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:12.448 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:12.448 Compiler for C supports arguments 
-mavx512f: YES (cached) 00:02:12.448 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:12.448 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:12.448 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:12.448 Has header "sys/epoll.h" : YES 00:02:12.448 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:12.448 Configuring doxy-api-html.conf using configuration 00:02:12.448 Configuring doxy-api-man.conf using configuration 00:02:12.448 Program mandb found: YES (/usr/bin/mandb) 00:02:12.448 Program sphinx-build found: NO 00:02:12.448 Configuring rte_build_config.h using configuration 00:02:12.448 Message: 00:02:12.448 ================= 00:02:12.448 Applications Enabled 00:02:12.448 ================= 00:02:12.448 00:02:12.448 apps: 00:02:12.448 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:12.448 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:12.448 test-pmd, test-regex, test-sad, test-security-perf, 00:02:12.448 00:02:12.448 Message: 00:02:12.448 ================= 00:02:12.448 Libraries Enabled 00:02:12.448 ================= 00:02:12.448 00:02:12.448 libs: 00:02:12.448 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:12.448 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:12.448 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:12.448 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:12.448 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:12.448 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:12.448 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:12.448 00:02:12.448 00:02:12.448 Message: 00:02:12.448 =============== 00:02:12.448 Drivers Enabled 00:02:12.448 =============== 00:02:12.448 00:02:12.448 common: 00:02:12.448 00:02:12.448 bus: 00:02:12.448 pci, vdev, 
00:02:12.448 mempool: 00:02:12.448 ring, 00:02:12.448 dma: 00:02:12.448 00:02:12.448 net: 00:02:12.448 i40e, 00:02:12.448 raw: 00:02:12.448 00:02:12.448 crypto: 00:02:12.448 00:02:12.448 compress: 00:02:12.448 00:02:12.448 regex: 00:02:12.448 00:02:12.448 ml: 00:02:12.448 00:02:12.448 vdpa: 00:02:12.448 00:02:12.448 event: 00:02:12.448 00:02:12.448 baseband: 00:02:12.448 00:02:12.448 gpu: 00:02:12.448 00:02:12.448 00:02:12.448 Message: 00:02:12.448 ================= 00:02:12.448 Content Skipped 00:02:12.448 ================= 00:02:12.448 00:02:12.448 apps: 00:02:12.448 00:02:12.448 libs: 00:02:12.448 00:02:12.448 drivers: 00:02:12.448 common/cpt: not in enabled drivers build config 00:02:12.448 common/dpaax: not in enabled drivers build config 00:02:12.448 common/iavf: not in enabled drivers build config 00:02:12.448 common/idpf: not in enabled drivers build config 00:02:12.448 common/mvep: not in enabled drivers build config 00:02:12.448 common/octeontx: not in enabled drivers build config 00:02:12.448 bus/auxiliary: not in enabled drivers build config 00:02:12.448 bus/cdx: not in enabled drivers build config 00:02:12.448 bus/dpaa: not in enabled drivers build config 00:02:12.448 bus/fslmc: not in enabled drivers build config 00:02:12.448 bus/ifpga: not in enabled drivers build config 00:02:12.448 bus/platform: not in enabled drivers build config 00:02:12.448 bus/vmbus: not in enabled drivers build config 00:02:12.448 common/cnxk: not in enabled drivers build config 00:02:12.448 common/mlx5: not in enabled drivers build config 00:02:12.448 common/nfp: not in enabled drivers build config 00:02:12.449 common/qat: not in enabled drivers build config 00:02:12.449 common/sfc_efx: not in enabled drivers build config 00:02:12.449 mempool/bucket: not in enabled drivers build config 00:02:12.449 mempool/cnxk: not in enabled drivers build config 00:02:12.449 mempool/dpaa: not in enabled drivers build config 00:02:12.449 mempool/dpaa2: not in enabled drivers build config 
00:02:12.449 mempool/octeontx: not in enabled drivers build config 00:02:12.449 mempool/stack: not in enabled drivers build config 00:02:12.449 dma/cnxk: not in enabled drivers build config 00:02:12.449 dma/dpaa: not in enabled drivers build config 00:02:12.449 dma/dpaa2: not in enabled drivers build config 00:02:12.449 dma/hisilicon: not in enabled drivers build config 00:02:12.449 dma/idxd: not in enabled drivers build config 00:02:12.449 dma/ioat: not in enabled drivers build config 00:02:12.449 dma/skeleton: not in enabled drivers build config 00:02:12.449 net/af_packet: not in enabled drivers build config 00:02:12.449 net/af_xdp: not in enabled drivers build config 00:02:12.449 net/ark: not in enabled drivers build config 00:02:12.449 net/atlantic: not in enabled drivers build config 00:02:12.449 net/avp: not in enabled drivers build config 00:02:12.449 net/axgbe: not in enabled drivers build config 00:02:12.449 net/bnx2x: not in enabled drivers build config 00:02:12.449 net/bnxt: not in enabled drivers build config 00:02:12.449 net/bonding: not in enabled drivers build config 00:02:12.449 net/cnxk: not in enabled drivers build config 00:02:12.449 net/cpfl: not in enabled drivers build config 00:02:12.449 net/cxgbe: not in enabled drivers build config 00:02:12.449 net/dpaa: not in enabled drivers build config 00:02:12.449 net/dpaa2: not in enabled drivers build config 00:02:12.449 net/e1000: not in enabled drivers build config 00:02:12.449 net/ena: not in enabled drivers build config 00:02:12.449 net/enetc: not in enabled drivers build config 00:02:12.449 net/enetfec: not in enabled drivers build config 00:02:12.449 net/enic: not in enabled drivers build config 00:02:12.449 net/failsafe: not in enabled drivers build config 00:02:12.449 net/fm10k: not in enabled drivers build config 00:02:12.449 net/gve: not in enabled drivers build config 00:02:12.449 net/hinic: not in enabled drivers build config 00:02:12.449 net/hns3: not in enabled drivers build config 
00:02:12.449 net/iavf: not in enabled drivers build config 00:02:12.449 net/ice: not in enabled drivers build config 00:02:12.449 net/idpf: not in enabled drivers build config 00:02:12.449 net/igc: not in enabled drivers build config 00:02:12.449 net/ionic: not in enabled drivers build config 00:02:12.449 net/ipn3ke: not in enabled drivers build config 00:02:12.449 net/ixgbe: not in enabled drivers build config 00:02:12.449 net/mana: not in enabled drivers build config 00:02:12.449 net/memif: not in enabled drivers build config 00:02:12.449 net/mlx4: not in enabled drivers build config 00:02:12.449 net/mlx5: not in enabled drivers build config 00:02:12.449 net/mvneta: not in enabled drivers build config 00:02:12.449 net/mvpp2: not in enabled drivers build config 00:02:12.449 net/netvsc: not in enabled drivers build config 00:02:12.449 net/nfb: not in enabled drivers build config 00:02:12.449 net/nfp: not in enabled drivers build config 00:02:12.449 net/ngbe: not in enabled drivers build config 00:02:12.449 net/null: not in enabled drivers build config 00:02:12.449 net/octeontx: not in enabled drivers build config 00:02:12.449 net/octeon_ep: not in enabled drivers build config 00:02:12.449 net/pcap: not in enabled drivers build config 00:02:12.449 net/pfe: not in enabled drivers build config 00:02:12.449 net/qede: not in enabled drivers build config 00:02:12.449 net/ring: not in enabled drivers build config 00:02:12.449 net/sfc: not in enabled drivers build config 00:02:12.449 net/softnic: not in enabled drivers build config 00:02:12.449 net/tap: not in enabled drivers build config 00:02:12.449 net/thunderx: not in enabled drivers build config 00:02:12.449 net/txgbe: not in enabled drivers build config 00:02:12.449 net/vdev_netvsc: not in enabled drivers build config 00:02:12.449 net/vhost: not in enabled drivers build config 00:02:12.449 net/virtio: not in enabled drivers build config 00:02:12.449 net/vmxnet3: not in enabled drivers build config 00:02:12.449 
raw/cnxk_bphy: not in enabled drivers build config 00:02:12.449 raw/cnxk_gpio: not in enabled drivers build config 00:02:12.449 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:12.449 raw/ifpga: not in enabled drivers build config 00:02:12.449 raw/ntb: not in enabled drivers build config 00:02:12.449 raw/skeleton: not in enabled drivers build config 00:02:12.449 crypto/armv8: not in enabled drivers build config 00:02:12.449 crypto/bcmfs: not in enabled drivers build config 00:02:12.449 crypto/caam_jr: not in enabled drivers build config 00:02:12.449 crypto/ccp: not in enabled drivers build config 00:02:12.449 crypto/cnxk: not in enabled drivers build config 00:02:12.449 crypto/dpaa_sec: not in enabled drivers build config 00:02:12.449 crypto/dpaa2_sec: not in enabled drivers build config 00:02:12.449 crypto/ipsec_mb: not in enabled drivers build config 00:02:12.449 crypto/mlx5: not in enabled drivers build config 00:02:12.449 crypto/mvsam: not in enabled drivers build config 00:02:12.449 crypto/nitrox: not in enabled drivers build config 00:02:12.449 crypto/null: not in enabled drivers build config 00:02:12.449 crypto/octeontx: not in enabled drivers build config 00:02:12.449 crypto/openssl: not in enabled drivers build config 00:02:12.449 crypto/scheduler: not in enabled drivers build config 00:02:12.449 crypto/uadk: not in enabled drivers build config 00:02:12.449 crypto/virtio: not in enabled drivers build config 00:02:12.449 compress/isal: not in enabled drivers build config 00:02:12.449 compress/mlx5: not in enabled drivers build config 00:02:12.449 compress/octeontx: not in enabled drivers build config 00:02:12.449 compress/zlib: not in enabled drivers build config 00:02:12.449 regex/mlx5: not in enabled drivers build config 00:02:12.449 regex/cn9k: not in enabled drivers build config 00:02:12.449 ml/cnxk: not in enabled drivers build config 00:02:12.449 vdpa/ifc: not in enabled drivers build config 00:02:12.449 vdpa/mlx5: not in enabled drivers 
build config 00:02:12.449 vdpa/nfp: not in enabled drivers build config 00:02:12.449 vdpa/sfc: not in enabled drivers build config 00:02:12.449 event/cnxk: not in enabled drivers build config 00:02:12.449 event/dlb2: not in enabled drivers build config 00:02:12.449 event/dpaa: not in enabled drivers build config 00:02:12.449 event/dpaa2: not in enabled drivers build config 00:02:12.449 event/dsw: not in enabled drivers build config 00:02:12.449 event/opdl: not in enabled drivers build config 00:02:12.449 event/skeleton: not in enabled drivers build config 00:02:12.449 event/sw: not in enabled drivers build config 00:02:12.449 event/octeontx: not in enabled drivers build config 00:02:12.449 baseband/acc: not in enabled drivers build config 00:02:12.450 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:12.450 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:12.450 baseband/la12xx: not in enabled drivers build config 00:02:12.450 baseband/null: not in enabled drivers build config 00:02:12.450 baseband/turbo_sw: not in enabled drivers build config 00:02:12.450 gpu/cuda: not in enabled drivers build config 00:02:12.450 00:02:12.450 00:02:12.450 Build targets in project: 217 00:02:12.450 00:02:12.450 DPDK 23.11.0 00:02:12.450 00:02:12.450 User defined options 00:02:12.450 libdir : lib 00:02:12.450 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:12.450 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:12.450 c_link_args : 00:02:12.450 enable_docs : false 00:02:12.450 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:12.450 enable_kmods : false 00:02:12.450 machine : native 00:02:12.450 tests : false 00:02:12.450 00:02:12.450 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:12.450 WARNING: Running the setup command as `meson [options]` instead of `meson setup 
[options]` is ambiguous and deprecated. 00:02:12.716 02:23:43 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 00:02:12.716 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:12.716 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:12.716 [2/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:12.716 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:12.716 [4/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:12.980 [5/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:12.980 [6/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:12.980 [7/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:12.980 [8/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:12.980 [9/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:12.980 [10/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:12.980 [11/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:12.980 [12/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:12.980 [13/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:12.980 [14/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:12.980 [15/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:12.980 [16/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:12.980 [17/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:12.980 [18/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:12.980 [19/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:12.980 [20/707] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:12.980 [21/707] Linking static target lib/librte_kvargs.a 00:02:12.980 [22/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:12.980 [23/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:12.980 [24/707] Linking static target lib/librte_pci.a 00:02:12.980 [25/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:13.240 [26/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:13.240 [27/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:13.240 [28/707] Linking static target lib/librte_log.a 00:02:13.240 [29/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:13.240 [30/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:13.240 [31/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:13.240 [32/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:13.240 [33/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:13.240 [34/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:13.240 [35/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:13.240 [36/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:13.501 [37/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:13.501 [38/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.501 [39/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:13.501 [40/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.501 [41/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:13.501 [42/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:13.501 
[43/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:13.501 [44/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:13.501 [45/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:13.501 [46/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:13.501 [47/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:13.501 [48/707] Linking static target lib/librte_meter.a 00:02:13.501 [49/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:13.501 [50/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:13.501 [51/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:13.501 [52/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:13.501 [53/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:13.501 [54/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:13.501 [55/707] Linking static target lib/librte_ring.a 00:02:13.501 [56/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:13.501 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:13.501 [58/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:13.501 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:13.501 [60/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:13.501 [61/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:13.501 [62/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:13.501 [63/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:13.501 [64/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:13.501 [65/707] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:13.501 [66/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:13.501 [67/707] Linking static target lib/librte_cmdline.a 00:02:13.769 [68/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:13.769 [69/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:13.769 [70/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:13.769 [71/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:13.769 [72/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:13.769 [73/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:13.769 [74/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:13.769 [75/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:13.769 [76/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:13.769 [77/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:13.769 [78/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:13.769 [79/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:13.769 [80/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:13.769 [81/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:13.769 [82/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:13.769 [83/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:13.769 [84/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:13.769 [85/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:13.769 [86/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:13.769 [87/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:13.769 
[88/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:13.769 [89/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:13.769 [90/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:13.769 [91/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:13.769 [92/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:13.769 [93/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:13.769 [94/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:13.769 [95/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:13.769 [96/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:13.769 [97/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:13.769 [98/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:13.769 [99/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:14.030 [100/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.030 [101/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:14.030 [102/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:14.030 [103/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:14.030 [104/707] Linking static target lib/librte_net.a 00:02:14.030 [105/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:14.030 [106/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:14.030 [107/707] Linking static target lib/librte_metrics.a 00:02:14.030 [108/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:14.030 [109/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.030 [110/707] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:14.030 [111/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:14.030 [112/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:14.030 [113/707] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.030 [114/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:14.030 [115/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:14.030 [116/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:14.030 [117/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:14.030 [118/707] Linking static target lib/librte_cfgfile.a 00:02:14.030 [119/707] Linking target lib/librte_log.so.24.0 00:02:14.030 [120/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:14.030 [121/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:14.030 [122/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:14.030 [123/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:14.030 [124/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:14.290 [125/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:14.290 [126/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:14.290 [127/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:14.290 [128/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:14.290 [129/707] Linking static target lib/librte_mempool.a 00:02:14.290 [130/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:14.290 [131/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:14.290 [132/707] Linking static target lib/librte_bitratestats.a 00:02:14.290 [133/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:14.290 [134/707] Compiling C object 
lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:14.290 [135/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:14.290 [136/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:14.290 [137/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:14.290 [138/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:14.290 [139/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:14.290 [140/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:14.290 [141/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:14.290 [142/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:14.290 [143/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:14.290 [144/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:14.290 [145/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:14.290 [146/707] Linking static target lib/librte_timer.a 00:02:14.290 [147/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:14.290 [148/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:14.558 [149/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:14.558 [150/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:14.558 [151/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:14.558 [152/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:14.558 [153/707] Linking target lib/librte_kvargs.so.24.0 00:02:14.558 [154/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:14.558 [155/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:14.558 [156/707] Compiling C object 
lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:14.558 [157/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:14.558 [158/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.558 [159/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:14.558 [160/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:14.558 [161/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.558 [162/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.558 [163/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:14.558 [164/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:14.558 [165/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:14.558 [166/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:14.558 [167/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:14.558 [168/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:14.558 [169/707] Linking static target lib/librte_telemetry.a 00:02:14.558 [170/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:14.558 [171/707] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:14.558 [172/707] Linking static target lib/librte_compressdev.a 00:02:14.558 [173/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:14.558 [174/707] Linking static target lib/librte_jobstats.a 00:02:14.558 [175/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:14.822 [176/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:14.822 [177/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:14.822 [178/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:14.822 
[179/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:14.822 [180/707] Linking static target lib/librte_dispatcher.a 00:02:14.822 [181/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:14.822 [182/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.822 [183/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:14.822 [184/707] Linking static target lib/librte_eal.a 00:02:14.822 [185/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:14.822 [186/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:14.822 [187/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:14.822 [188/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:14.822 [189/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:14.822 [190/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:14.822 [191/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:14.822 [192/707] Linking static target lib/librte_mbuf.a 00:02:14.822 [193/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:14.822 [194/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:14.822 [195/707] Linking static target lib/librte_bbdev.a 00:02:14.822 [196/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:14.822 [197/707] Linking static target lib/librte_dmadev.a 00:02:14.822 [198/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:14.822 [199/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:14.822 [200/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:14.823 [201/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:15.088 [202/707] Linking static target lib/librte_latencystats.a 00:02:15.088 [203/707] Compiling C object 
lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:15.088 [204/707] Linking static target lib/librte_distributor.a 00:02:15.088 [205/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:15.088 [206/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.088 [207/707] Linking static target lib/librte_gpudev.a 00:02:15.088 [208/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:15.088 [209/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:15.088 [210/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.088 [211/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:15.088 [212/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:15.088 [213/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:15.088 [214/707] Linking static target lib/librte_gro.a 00:02:15.088 [215/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:15.088 [216/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:15.088 [217/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:15.088 [218/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:15.088 [219/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:15.088 [220/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:15.088 [221/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:15.088 [222/707] Linking static target lib/librte_rcu.a 00:02:15.088 [223/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:15.088 [224/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:15.088 [225/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:15.088 [226/707] 
Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:15.088 [227/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:15.088 [228/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:15.088 [229/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:15.088 [230/707] Linking static target lib/librte_gso.a 00:02:15.088 [231/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:15.088 [232/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.088 [233/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:15.088 [234/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:15.353 [235/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:15.353 [236/707] Linking static target lib/librte_ip_frag.a 00:02:15.353 [237/707] Linking static target lib/librte_stack.a 00:02:15.353 [238/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:15.353 [239/707] Linking static target lib/librte_regexdev.a 00:02:15.353 [240/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.353 [241/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:15.353 [242/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:15.353 [243/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:15.353 [244/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.353 [245/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.353 [246/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:15.353 [247/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:15.353 [248/707] Compiling C object 
lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:15.353 [249/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.616 [250/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:15.616 [251/707] Linking static target lib/librte_bpf.a 00:02:15.616 [252/707] Linking target lib/librte_telemetry.so.24.0 00:02:15.616 [253/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:15.616 [254/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:15.616 [255/707] Linking static target lib/librte_mldev.a 00:02:15.616 [256/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.616 [257/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:15.616 [258/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.616 [259/707] Linking static target lib/librte_rawdev.a 00:02:15.616 [260/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:15.616 [261/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.616 [262/707] Linking static target lib/librte_reorder.a 00:02:15.616 [263/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.616 [264/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:15.616 [265/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.616 [266/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:15.616 [267/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:15.616 [268/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:15.616 [269/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:15.616 [270/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:15.616 
[271/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:15.616 [272/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:15.616 [273/707] Linking static target lib/librte_power.a 00:02:15.616 [274/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.616 [275/707] Linking static target lib/librte_security.a 00:02:15.616 [276/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:15.616 [277/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.616 [278/707] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:15.616 [279/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:15.616 [280/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:15.616 [281/707] Linking static target lib/librte_pcapng.a 00:02:15.616 [282/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:15.616 [283/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:15.616 [284/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:15.616 [285/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:15.886 [286/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:15.886 [287/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:15.886 [288/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.886 [289/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:15.886 [290/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:15.886 [291/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:15.886 [292/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:15.886 [293/707] Linking static target lib/librte_rib.a 00:02:15.886 [294/707] Compiling C 
object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:15.886 [295/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:15.886 [296/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.886 [297/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:15.886 [298/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.146 [299/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.146 [300/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:16.146 [301/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:16.146 [302/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:16.146 [303/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:16.146 [304/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:16.146 [305/707] Linking static target lib/librte_lpm.a 00:02:16.146 [306/707] Linking static target lib/librte_efd.a 00:02:16.146 [307/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:16.146 [308/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:16.146 [309/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:16.146 [310/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:16.147 [311/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.147 [312/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:16.147 [313/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.147 [314/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:16.147 [315/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:16.147 [316/707] Compiling C object 
lib/librte_table.a.p/table_rte_table_array.c.o 00:02:16.412 [317/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:16.412 [318/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:16.412 [319/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:16.412 [320/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:16.412 [321/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:16.412 [322/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:16.412 [323/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:16.412 [324/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.412 [325/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:16.412 [326/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:16.412 [327/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.412 [328/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:16.412 [329/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.412 [330/707] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:16.412 [331/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:16.412 [332/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:16.412 [333/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:16.412 [334/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:16.675 [335/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:16.675 [336/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:16.675 [337/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:16.675 [338/707] 
Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:16.675 [339/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:16.675 [340/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.675 [341/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:16.675 [342/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.675 [343/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:16.675 [344/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:16.675 [345/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:16.675 [346/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:16.675 [347/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:16.675 [348/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:16.675 [349/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:16.675 [350/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:16.675 [351/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:16.675 [352/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.675 [353/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.675 [354/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:16.675 [355/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:16.675 [356/707] Linking static target lib/librte_fib.a 00:02:16.942 [357/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.943 [358/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:16.943 [359/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:16.943 [360/707] Compiling C object 
lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:16.943 [361/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:16.943 [362/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:16.943 [363/707] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:16.943 [364/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:16.943 [365/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:16.943 [366/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:16.943 [367/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:16.943 [368/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:16.943 [369/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:16.943 [370/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:16.943 [371/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:16.943 [372/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:16.943 [373/707] Linking static target lib/librte_pdump.a 00:02:16.943 [374/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:16.943 [375/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:17.206 [376/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:17.206 [377/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:17.206 [378/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:17.206 [379/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:17.206 [380/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:17.206 [381/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:17.206 [382/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:17.206 [383/707] Compiling 
C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:17.206 [384/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:17.206 [385/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:17.206 [386/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:17.206 [387/707] Linking static target lib/librte_graph.a 00:02:17.470 [388/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:17.470 [389/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:17.470 [390/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:17.470 [391/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:17.470 [392/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:17.470 [393/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:17.470 [394/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:17.470 [395/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:17.470 [396/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.470 [397/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:17.470 [398/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:17.470 [399/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:17.470 [400/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:17.470 [401/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:17.470 [402/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:17.470 [403/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:17.470 [404/707] Linking static target drivers/librte_bus_vdev.a 00:02:17.470 [405/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:17.470 [406/707] Compiling C object 
lib/librte_node.a.p/node_udp4_input.c.o 00:02:17.470 [407/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.470 [408/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:17.470 [409/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:17.470 [410/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:17.734 [411/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:17.734 [412/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:17.734 [413/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:17.734 [414/707] Linking static target lib/librte_table.a 00:02:17.734 [415/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:17.734 [416/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:17.734 [417/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:17.734 [418/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:17.734 [419/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:17.734 [420/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:17.734 [421/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:17.734 [422/707] Linking static target lib/librte_cryptodev.a 00:02:17.734 [423/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:17.734 [424/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:17.734 [425/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:17.734 [426/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:17.734 [427/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:17.734 [428/707] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:17.734 [429/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:17.734 [430/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:17.999 [431/707] Linking static target lib/librte_sched.a 00:02:17.999 [432/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:17.999 [433/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:17.999 [434/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:17.999 [435/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:17.999 [436/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:17.999 [437/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:17.999 [438/707] Linking static target drivers/librte_bus_pci.a 00:02:17.999 [439/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.999 [440/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:17.999 [441/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.999 [442/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:17.999 [443/707] Linking static target lib/librte_hash.a 00:02:17.999 [444/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:18.264 [445/707] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:18.264 [446/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:18.264 [447/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:18.264 [448/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:18.264 [449/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 
00:02:18.264 [450/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:18.264 [451/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:18.264 [452/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:18.264 [453/707] Linking static target lib/librte_ipsec.a 00:02:18.264 [454/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:18.264 [455/707] Linking static target lib/librte_pdcp.a 00:02:18.264 [456/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:18.264 [457/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:18.264 [458/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:18.264 [459/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:18.264 [460/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:18.264 [461/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:18.527 [462/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:18.527 [463/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:18.527 [464/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:18.527 [465/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:18.527 [466/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:18.527 [467/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:18.527 [468/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:18.527 [469/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:18.527 [470/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.527 [471/707] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:18.527 [472/707] Linking static target lib/librte_port.a 00:02:18.527 [473/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:18.527 [474/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:18.527 [475/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:18.527 [476/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:18.527 [477/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:18.527 [478/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:18.527 [479/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:18.527 [480/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:18.527 [481/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:18.527 [482/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:18.527 [483/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:18.527 [484/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:18.527 [485/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.527 [486/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:18.527 [487/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:18.527 [488/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:18.527 [489/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:18.786 [490/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:18.786 [491/707] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:18.786 [492/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:18.786 [493/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:18.786 [494/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:18.786 [495/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:18.786 [496/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:18.786 [497/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:18.786 [498/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:18.786 [499/707] Linking static target lib/librte_member.a 00:02:18.786 [500/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:18.786 [501/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:18.786 [502/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:18.786 [503/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:18.786 [504/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:18.786 [505/707] Linking static target drivers/librte_mempool_ring.a 00:02:18.786 [506/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:18.786 [507/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.786 [508/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:18.786 [509/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:18.786 [510/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.786 [511/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:18.786 [512/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.786 [513/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:18.786 [514/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:18.786 [515/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:19.046 [516/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:19.046 [517/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.046 [518/707] Linking static target lib/librte_node.a 00:02:19.046 [519/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:19.046 [520/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:19.046 [521/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:19.046 [522/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:19.046 [523/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:19.046 [524/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:19.046 [525/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:19.046 [526/707] Linking static target lib/librte_eventdev.a 00:02:19.046 [527/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:19.046 [528/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:19.306 [529/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:19.306 [530/707] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:19.306 [531/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.306 [532/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.306 [533/707] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:19.306 [534/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:19.306 [535/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:19.306 [536/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:19.306 [537/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:19.306 [538/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:19.306 [539/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:19.306 [540/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:19.306 [541/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:19.306 [542/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:19.306 [543/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:19.306 [544/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.306 [545/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:19.306 [546/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:19.306 [547/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:19.566 [548/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:19.566 [549/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:19.566 [550/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:19.566 [551/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:19.566 [552/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:19.566 [553/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:19.566 [554/707] Linking static target lib/acl/libavx2_tmp.a 00:02:19.566 [555/707] 
Linking static target lib/librte_acl.a 00:02:19.566 [556/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:19.566 [557/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:19.566 [558/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:19.566 [559/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:19.566 [560/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:19.566 [561/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:19.566 [562/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:19.825 [563/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.825 [564/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:19.825 [565/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:19.825 [566/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:19.825 [567/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:19.825 [568/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:20.085 [569/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.085 [570/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:20.085 [571/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:20.085 [572/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:20.085 [573/707] Linking static target lib/librte_ethdev.a 00:02:20.085 [574/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:20.655 [575/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:20.655 [576/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:20.914 [577/707] 
Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:21.173 [578/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:21.173 [579/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:21.173 [580/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:21.742 [581/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:21.742 [582/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:22.001 [583/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:22.001 [584/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:22.261 [585/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:22.261 [586/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:22.261 [587/707] Linking static target drivers/librte_net_i40e.a 00:02:22.261 [588/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.830 [589/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:23.089 [590/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.089 [591/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:24.027 [592/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:25.933 [593/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.933 [594/707] Linking target lib/librte_eal.so.24.0 00:02:25.933 [595/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:25.933 [596/707] Linking target lib/librte_cfgfile.so.24.0 00:02:25.933 [597/707] Linking target lib/librte_ring.so.24.0 00:02:25.933 [598/707] Linking target lib/librte_pci.so.24.0 00:02:25.933 [599/707] Linking target 
lib/librte_meter.so.24.0 00:02:25.933 [600/707] Linking target lib/librte_timer.so.24.0 00:02:25.933 [601/707] Linking target lib/librte_dmadev.so.24.0 00:02:25.933 [602/707] Linking target lib/librte_stack.so.24.0 00:02:25.933 [603/707] Linking target lib/librte_jobstats.so.24.0 00:02:25.933 [604/707] Linking target lib/librte_rawdev.so.24.0 00:02:25.933 [605/707] Linking target drivers/librte_bus_vdev.so.24.0 00:02:25.933 [606/707] Linking target lib/librte_acl.so.24.0 00:02:25.933 [607/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:25.933 [608/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:25.933 [609/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:25.933 [610/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:25.933 [611/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:25.933 [612/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:25.933 [613/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:25.933 [614/707] Linking target drivers/librte_bus_pci.so.24.0 00:02:25.933 [615/707] Linking target lib/librte_rcu.so.24.0 00:02:25.933 [616/707] Linking target lib/librte_mempool.so.24.0 00:02:26.193 [617/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:26.193 [618/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:26.193 [619/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:26.193 [620/707] Linking target drivers/librte_mempool_ring.so.24.0 00:02:26.193 [621/707] Linking target lib/librte_rib.so.24.0 00:02:26.193 [622/707] Linking target lib/librte_mbuf.so.24.0 00:02:26.193 [623/707] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 
00:02:26.453 [624/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:26.453 [625/707] Linking target lib/librte_fib.so.24.0 00:02:26.453 [626/707] Linking target lib/librte_net.so.24.0 00:02:26.453 [627/707] Linking target lib/librte_bbdev.so.24.0 00:02:26.453 [628/707] Linking target lib/librte_compressdev.so.24.0 00:02:26.453 [629/707] Linking target lib/librte_gpudev.so.24.0 00:02:26.453 [630/707] Linking target lib/librte_distributor.so.24.0 00:02:26.453 [631/707] Linking target lib/librte_mldev.so.24.0 00:02:26.453 [632/707] Linking target lib/librte_reorder.so.24.0 00:02:26.453 [633/707] Linking target lib/librte_cryptodev.so.24.0 00:02:26.453 [634/707] Linking target lib/librte_regexdev.so.24.0 00:02:26.453 [635/707] Linking target lib/librte_sched.so.24.0 00:02:26.453 [636/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:26.453 [637/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:26.453 [638/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:26.453 [639/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:26.453 [640/707] Linking target lib/librte_cmdline.so.24.0 00:02:26.453 [641/707] Linking target lib/librte_hash.so.24.0 00:02:26.453 [642/707] Linking target lib/librte_security.so.24.0 00:02:26.712 [643/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:26.712 [644/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:26.712 [645/707] Linking target lib/librte_efd.so.24.0 00:02:26.712 [646/707] Linking target lib/librte_lpm.so.24.0 00:02:26.712 [647/707] Linking target lib/librte_member.so.24.0 00:02:26.712 [648/707] Linking target lib/librte_pdcp.so.24.0 00:02:26.712 [649/707] Linking target lib/librte_ipsec.so.24.0 00:02:26.972 [650/707] Generating symbol file 
lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:26.972 [651/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:27.541 [652/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.541 [653/707] Linking target lib/librte_ethdev.so.24.0 00:02:27.801 [654/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:27.801 [655/707] Linking target lib/librte_metrics.so.24.0 00:02:27.801 [656/707] Linking target lib/librte_gro.so.24.0 00:02:27.801 [657/707] Linking target lib/librte_power.so.24.0 00:02:27.801 [658/707] Linking target lib/librte_gso.so.24.0 00:02:27.801 [659/707] Linking target lib/librte_pcapng.so.24.0 00:02:27.801 [660/707] Linking target lib/librte_ip_frag.so.24.0 00:02:27.801 [661/707] Linking target lib/librte_bpf.so.24.0 00:02:27.801 [662/707] Linking target lib/librte_eventdev.so.24.0 00:02:27.801 [663/707] Linking target drivers/librte_net_i40e.so.24.0 00:02:28.061 [664/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:28.061 [665/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:28.061 [666/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:28.061 [667/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:28.061 [668/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:28.061 [669/707] Linking target lib/librte_bitratestats.so.24.0 00:02:28.061 [670/707] Linking target lib/librte_latencystats.so.24.0 00:02:28.061 [671/707] Linking target lib/librte_graph.so.24.0 00:02:28.061 [672/707] Linking target lib/librte_pdump.so.24.0 00:02:28.061 [673/707] Linking target lib/librte_dispatcher.so.24.0 00:02:28.061 [674/707] Linking target lib/librte_port.so.24.0 00:02:28.061 [675/707] Generating symbol file 
lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:28.061 [676/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:28.321 [677/707] Linking target lib/librte_node.so.24.0 00:02:28.321 [678/707] Linking target lib/librte_table.so.24.0 00:02:28.321 [679/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:30.860 [680/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:30.860 [681/707] Linking static target lib/librte_pipeline.a 00:02:30.860 [682/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:31.119 [683/707] Linking static target lib/librte_vhost.a 00:02:31.378 [684/707] Linking target app/dpdk-test-dma-perf 00:02:31.378 [685/707] Linking target app/dpdk-pdump 00:02:31.378 [686/707] Linking target app/dpdk-dumpcap 00:02:31.378 [687/707] Linking target app/dpdk-test-flow-perf 00:02:31.378 [688/707] Linking target app/dpdk-proc-info 00:02:31.378 [689/707] Linking target app/dpdk-test-fib 00:02:31.378 [690/707] Linking target app/dpdk-test-sad 00:02:31.378 [691/707] Linking target app/dpdk-test-cmdline 00:02:31.378 [692/707] Linking target app/dpdk-test-mldev 00:02:31.378 [693/707] Linking target app/dpdk-test-pipeline 00:02:31.378 [694/707] Linking target app/dpdk-test-gpudev 00:02:31.378 [695/707] Linking target app/dpdk-test-acl 00:02:31.378 [696/707] Linking target app/dpdk-test-regex 00:02:31.378 [697/707] Linking target app/dpdk-test-crypto-perf 00:02:31.378 [698/707] Linking target app/dpdk-test-security-perf 00:02:31.378 [699/707] Linking target app/dpdk-graph 00:02:31.378 [700/707] Linking target app/dpdk-test-compress-perf 00:02:31.378 [701/707] Linking target app/dpdk-test-bbdev 00:02:31.378 [702/707] Linking target app/dpdk-test-eventdev 00:02:31.638 [703/707] Linking target app/dpdk-testpmd 00:02:33.021 [704/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.021 
[705/707] Linking target lib/librte_vhost.so.24.0 00:02:35.562 [706/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.562 [707/707] Linking target lib/librte_pipeline.so.24.0 00:02:35.562 02:24:06 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:02:35.562 02:24:06 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:35.562 02:24:06 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 install 00:02:35.562 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:35.562 [0/1] Installing files. 00:02:35.825 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.825 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.825 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.825 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.825 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.825 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.826 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.826 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:35.826 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.826 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.827 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:35.827 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:35.827 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:35.827 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:35.827 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.827 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:35.828 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:35.828 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:35.828 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.829 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.829 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.829 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.830 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:35.830 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:35.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:35.830 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.830 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.830 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.830 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.830 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.830 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.830 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.830 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.830 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.830 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.830 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.830 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.830 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.830 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.830 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:35.831 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:36.092 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:36.092 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:36.092 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:36.092 Installing lib/librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:36.092 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:36.092 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:36.092 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:36.092 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:36.092 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:36.092 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:36.092 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:36.092 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:36.092 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:36.092 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:36.092 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:36.092 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:36.092 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:36.092 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:36.092 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:36.092 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:36.092 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:36.092 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:36.092 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:36.092 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:02:36.092 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:36.092 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:02:36.092 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:36.092 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:02:36.092 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:36.092 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0
00:02:36.092 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:36.092 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:36.092 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:36.092 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:36.092 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:36.092 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:36.092 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:36.092 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:36.092 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:36.092 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:36.092 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:36.092 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:36.092 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:36.092 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:36.092 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:36.092 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:36.092 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:36.092 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:36.092 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:36.092 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.092 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.093 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.093 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.094 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.356 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.357 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:36.358 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:36.358 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 
00:02:36.358 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:36.358 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:36.358 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:36.358 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:36.358 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:36.358 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:36.358 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:36.358 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:36.358 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:36.358 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:36.358 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:36.358 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:36.358 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:36.358 Installing symlink pointing to librte_mbuf.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:36.358 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:36.358 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:36.358 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:36.358 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:36.358 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:36.358 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:36.358 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:36.358 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:36.358 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:36.358 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:36.358 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:36.358 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:36.358 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:36.358 Installing symlink pointing to 
librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:36.358 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:36.358 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:36.358 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:36.358 Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:36.358 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:36.358 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:36.358 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:36.358 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:36.358 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:36.358 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:36.358 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:36.358 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:36.358 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 
00:02:36.359 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:36.359 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:36.359 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:36.359 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:36.359 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:36.359 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:36.359 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:36.359 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:36.359 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:36.359 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:36.359 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:36.359 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:36.359 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:36.359 Installing symlink 
pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:36.359 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:36.359 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:36.359 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:36.359 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:36.359 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:36.359 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:36.359 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:36.359 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:36.359 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:36.359 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:36.359 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:36.359 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:36.359 Installing symlink pointing to librte_lpm.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:36.359 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:36.359 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:36.359 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:36.359 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:36.359 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:36.359 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:36.359 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:36.359 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:36.359 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:36.359 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:36.359 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:36.359 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:36.359 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:36.359 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:36.359 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 
00:02:36.359 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:36.359 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:36.359 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:36.359 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:36.359 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:36.359 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:36.359 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:36.359 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:36.359 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:36.359 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:36.359 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:36.359 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:36.359 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:36.359 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:36.359 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:36.359 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:36.359 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:36.359 
Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:36.359 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:36.359 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:36.359 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:36.359 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:36.359 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:36.359 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:36.359 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:36.359 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:36.359 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:36.359 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:36.359 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:36.359 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:36.359 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 
00:02:36.359 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:36.359 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:36.359 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:36.359 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:36.359 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:36.359 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:36.359 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:36.359 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:36.359 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:36.359 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:36.359 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:36.359 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:36.359 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:36.359 Installing symlink 
pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:36.359 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:36.359 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:36.359 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:36.359 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:36.359 02:24:06 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:02:36.359 02:24:06 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:36.359 00:02:36.359 real 0m29.779s 00:02:36.359 user 9m34.566s 00:02:36.359 sys 2m12.580s 00:02:36.359 02:24:06 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:36.359 02:24:06 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:36.359 ************************************ 00:02:36.359 END TEST build_native_dpdk 00:02:36.359 ************************************ 00:02:36.359 02:24:06 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:36.360 02:24:06 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:36.360 02:24:06 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:36.360 02:24:06 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:36.360 02:24:06 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:36.360 02:24:06 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:36.360 02:24:06 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:36.360 02:24:06 -- spdk/autobuild.sh@67 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:36.360 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:36.619 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:36.619 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:36.619 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:37.188 Using 'verbs' RDMA provider 00:02:49.975 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:02.197 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:02.197 Creating mk/config.mk...done. 00:03:02.197 Creating mk/cc.flags.mk...done. 00:03:02.197 Type 'make' to build. 
00:03:02.197 02:24:32 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:03:02.197 02:24:32 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:02.197 02:24:32 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:02.197 02:24:32 -- common/autotest_common.sh@10 -- $ set +x 00:03:02.197 ************************************ 00:03:02.197 START TEST make 00:03:02.197 ************************************ 00:03:02.197 02:24:32 make -- common/autotest_common.sh@1129 -- $ make -j96 00:03:04.112 The Meson build system 00:03:04.112 Version: 1.5.0 00:03:04.112 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:04.112 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:04.112 Build type: native build 00:03:04.112 Project name: libvfio-user 00:03:04.112 Project version: 0.0.1 00:03:04.112 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:04.112 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:04.112 Host machine cpu family: x86_64 00:03:04.112 Host machine cpu: x86_64 00:03:04.112 Run-time dependency threads found: YES 00:03:04.112 Library dl found: YES 00:03:04.112 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:04.112 Run-time dependency json-c found: YES 0.17 00:03:04.112 Run-time dependency cmocka found: YES 1.1.7 00:03:04.112 Program pytest-3 found: NO 00:03:04.112 Program flake8 found: NO 00:03:04.112 Program misspell-fixer found: NO 00:03:04.112 Program restructuredtext-lint found: NO 00:03:04.112 Program valgrind found: YES (/usr/bin/valgrind) 00:03:04.112 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:04.112 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:04.112 Compiler for C supports arguments -Wwrite-strings: YES 00:03:04.112 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites 
arg in add_test_setup.
00:03:04.112 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:04.112 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:04.112 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:04.112 Build targets in project: 8
00:03:04.112 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:03:04.112 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:03:04.112
00:03:04.112 libvfio-user 0.0.1
00:03:04.112
00:03:04.112 User defined options
00:03:04.112 buildtype : debug
00:03:04.112 default_library: shared
00:03:04.112 libdir : /usr/local/lib
00:03:04.112
00:03:04.112 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:05.049 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:05.049 [1/37] Compiling C object samples/null.p/null.c.o
00:03:05.049 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:03:05.049 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:03:05.049 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:03:05.049 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:03:05.049 [6/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:03:05.049 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:03:05.049 [8/37] Compiling C object samples/lspci.p/lspci.c.o
00:03:05.049 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:03:05.049 [10/37] Compiling C object test/unit_tests.p/mocks.c.o
00:03:05.049 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:03:05.049 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:03:05.049 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:03:05.049 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:03:05.049 [15/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:03:05.049 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:03:05.049 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:03:05.049 [18/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:03:05.049 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:03:05.049 [20/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:03:05.049 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:03:05.049 [22/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:03:05.049 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:03:05.049 [24/37] Compiling C object samples/client.p/client.c.o
00:03:05.049 [25/37] Compiling C object samples/server.p/server.c.o
00:03:05.049 [26/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:03:05.049 [27/37] Linking target samples/client
00:03:05.049 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:03:05.049 [29/37] Linking target lib/libvfio-user.so.0.0.1
00:03:05.049 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:03:05.049 [31/37] Linking target test/unit_tests
00:03:05.309 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:03:05.309 [33/37] Linking target samples/gpio-pci-idio-16
00:03:05.309 [34/37] Linking target samples/server
00:03:05.309 [35/37] Linking target samples/null
00:03:05.309 [36/37] Linking target samples/lspci
00:03:05.309 [37/37] Linking target samples/shadow_ioeventfd_server
00:03:05.309 INFO: autodetecting backend as ninja
00:03:05.309 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:05.309 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:05.569 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:05.569 ninja: no work to do.
00:03:32.150 CC lib/log/log.o
00:03:32.150 CC lib/log/log_flags.o
00:03:32.150 CC lib/log/log_deprecated.o
00:03:32.150 CC lib/ut_mock/mock.o
00:03:32.150 CC lib/ut/ut.o
00:03:32.418 LIB libspdk_ut.a
00:03:32.419 LIB libspdk_ut_mock.a
00:03:32.419 LIB libspdk_log.a
00:03:32.419 SO libspdk_ut.so.2.0
00:03:32.419 SO libspdk_ut_mock.so.6.0
00:03:32.419 SO libspdk_log.so.7.1
00:03:32.419 SYMLINK libspdk_ut.so
00:03:32.419 SYMLINK libspdk_ut_mock.so
00:03:32.419 SYMLINK libspdk_log.so
00:03:32.704 CC lib/ioat/ioat.o
00:03:32.974 CC lib/util/base64.o
00:03:32.974 CXX lib/trace_parser/trace.o
00:03:32.974 CC lib/util/bit_array.o
00:03:32.974 CC lib/util/cpuset.o
00:03:32.974 CC lib/util/crc16.o
00:03:32.974 CC lib/dma/dma.o
00:03:32.974 CC lib/util/crc32.o
00:03:32.974 CC lib/util/crc32c.o
00:03:32.974 CC lib/util/crc32_ieee.o
00:03:32.974 CC lib/util/crc64.o
00:03:32.974 CC lib/util/dif.o
00:03:32.974 CC lib/util/fd.o
00:03:32.974 CC lib/util/fd_group.o
00:03:32.974 CC lib/util/file.o
00:03:32.974 CC lib/util/hexlify.o
00:03:32.974 CC lib/util/iov.o
00:03:32.974 CC lib/util/math.o
00:03:32.974 CC lib/util/net.o
00:03:32.974 CC lib/util/pipe.o
00:03:32.974 CC lib/util/strerror_tls.o
00:03:32.974 CC lib/util/string.o
00:03:32.974 CC lib/util/uuid.o
00:03:32.974 CC lib/util/xor.o
00:03:32.974 CC lib/util/zipf.o
00:03:32.974 CC lib/util/md5.o
00:03:32.974 CC lib/vfio_user/host/vfio_user_pci.o
00:03:32.974 CC lib/vfio_user/host/vfio_user.o
00:03:32.974 LIB libspdk_dma.a
00:03:32.974 SO libspdk_dma.so.5.0
00:03:33.247 LIB libspdk_ioat.a
00:03:33.247 SO libspdk_ioat.so.7.0
00:03:33.247 SYMLINK libspdk_dma.so
00:03:33.247 SYMLINK libspdk_ioat.so
00:03:33.247 LIB libspdk_vfio_user.a
00:03:33.247 SO libspdk_vfio_user.so.5.0
00:03:33.247 LIB libspdk_util.a
00:03:33.247 SYMLINK libspdk_vfio_user.so
00:03:33.529 SO libspdk_util.so.10.1
00:03:33.529 SYMLINK libspdk_util.so
00:03:33.529 LIB libspdk_trace_parser.a
00:03:33.529 SO libspdk_trace_parser.so.6.0
00:03:33.801 SYMLINK libspdk_trace_parser.so
00:03:33.801 CC lib/json/json_parse.o
00:03:33.801 CC lib/json/json_util.o
00:03:33.801 CC lib/json/json_write.o
00:03:33.801 CC lib/conf/conf.o
00:03:33.801 CC lib/vmd/vmd.o
00:03:33.801 CC lib/idxd/idxd.o
00:03:33.801 CC lib/vmd/led.o
00:03:33.801 CC lib/idxd/idxd_user.o
00:03:33.801 CC lib/env_dpdk/env.o
00:03:33.801 CC lib/idxd/idxd_kernel.o
00:03:33.801 CC lib/env_dpdk/memory.o
00:03:33.801 CC lib/rdma_utils/rdma_utils.o
00:03:33.801 CC lib/env_dpdk/pci.o
00:03:33.801 CC lib/env_dpdk/init.o
00:03:33.801 CC lib/env_dpdk/threads.o
00:03:33.801 CC lib/env_dpdk/pci_ioat.o
00:03:33.801 CC lib/env_dpdk/pci_virtio.o
00:03:33.801 CC lib/env_dpdk/pci_vmd.o
00:03:33.801 CC lib/env_dpdk/pci_idxd.o
00:03:33.801 CC lib/env_dpdk/pci_event.o
00:03:33.801 CC lib/env_dpdk/sigbus_handler.o
00:03:33.801 CC lib/env_dpdk/pci_dpdk.o
00:03:33.801 CC lib/env_dpdk/pci_dpdk_2207.o
00:03:33.801 CC lib/env_dpdk/pci_dpdk_2211.o
00:03:34.060 LIB libspdk_conf.a
00:03:34.060 SO libspdk_conf.so.6.0
00:03:34.060 LIB libspdk_json.a
00:03:34.060 LIB libspdk_rdma_utils.a
00:03:34.319 SYMLINK libspdk_conf.so
00:03:34.319 SO libspdk_json.so.6.0
00:03:34.319 SO libspdk_rdma_utils.so.1.0
00:03:34.319 SYMLINK libspdk_rdma_utils.so
00:03:34.319 SYMLINK libspdk_json.so
00:03:34.319 LIB libspdk_idxd.a
00:03:34.319 SO libspdk_idxd.so.12.1
00:03:34.319 LIB libspdk_vmd.a
00:03:34.578 SO libspdk_vmd.so.6.0
00:03:34.578 SYMLINK libspdk_idxd.so
00:03:34.578 SYMLINK libspdk_vmd.so
00:03:34.579 CC lib/rdma_provider/common.o
00:03:34.579 CC lib/rdma_provider/rdma_provider_verbs.o
00:03:34.579 CC lib/jsonrpc/jsonrpc_server.o
00:03:34.579 CC lib/jsonrpc/jsonrpc_server_tcp.o
00:03:34.579 CC lib/jsonrpc/jsonrpc_client.o
00:03:34.579 CC lib/jsonrpc/jsonrpc_client_tcp.o
00:03:34.837 LIB libspdk_rdma_provider.a
00:03:34.837 SO libspdk_rdma_provider.so.7.0
00:03:34.837 LIB libspdk_jsonrpc.a
00:03:34.837 SO libspdk_jsonrpc.so.6.0
00:03:34.837 SYMLINK libspdk_rdma_provider.so
00:03:34.837 LIB libspdk_env_dpdk.a
00:03:34.837 SYMLINK libspdk_jsonrpc.so
00:03:34.837 SO libspdk_env_dpdk.so.15.1
00:03:35.096 SYMLINK libspdk_env_dpdk.so
00:03:35.355 CC lib/rpc/rpc.o
00:03:35.355 LIB libspdk_rpc.a
00:03:35.615 SO libspdk_rpc.so.6.0
00:03:35.615 SYMLINK libspdk_rpc.so
00:03:35.874 CC lib/trace/trace.o
00:03:35.874 CC lib/trace/trace_flags.o
00:03:35.874 CC lib/trace/trace_rpc.o
00:03:35.874 CC lib/keyring/keyring.o
00:03:35.874 CC lib/keyring/keyring_rpc.o
00:03:35.874 CC lib/notify/notify.o
00:03:35.874 CC lib/notify/notify_rpc.o
00:03:36.133 LIB libspdk_notify.a
00:03:36.133 SO libspdk_notify.so.6.0
00:03:36.133 LIB libspdk_keyring.a
00:03:36.133 LIB libspdk_trace.a
00:03:36.133 SO libspdk_keyring.so.2.0
00:03:36.133 SO libspdk_trace.so.11.0
00:03:36.133 SYMLINK libspdk_notify.so
00:03:36.133 SYMLINK libspdk_keyring.so
00:03:36.133 SYMLINK libspdk_trace.so
00:03:36.702 CC lib/thread/thread.o
00:03:36.702 CC lib/thread/iobuf.o
00:03:36.702 CC lib/sock/sock.o
00:03:36.702 CC lib/sock/sock_rpc.o
00:03:36.962 LIB libspdk_sock.a
00:03:36.962 SO libspdk_sock.so.10.0
00:03:36.962 SYMLINK libspdk_sock.so
00:03:37.530 CC lib/nvme/nvme_ctrlr_cmd.o
00:03:37.530 CC lib/nvme/nvme_ctrlr.o
00:03:37.530 CC lib/nvme/nvme_fabric.o
00:03:37.530 CC lib/nvme/nvme_ns_cmd.o
00:03:37.530 CC lib/nvme/nvme_ns.o
00:03:37.530 CC lib/nvme/nvme_pcie_common.o
00:03:37.530 CC lib/nvme/nvme_pcie.o
00:03:37.530 CC lib/nvme/nvme_qpair.o
00:03:37.530 CC lib/nvme/nvme.o
00:03:37.530 CC lib/nvme/nvme_quirks.o
00:03:37.530 CC lib/nvme/nvme_transport.o
00:03:37.530 CC lib/nvme/nvme_discovery.o
00:03:37.530 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o
00:03:37.530 CC lib/nvme/nvme_ns_ocssd_cmd.o
00:03:37.530 CC lib/nvme/nvme_tcp.o
00:03:37.530 CC lib/nvme/nvme_opal.o
00:03:37.530 CC lib/nvme/nvme_io_msg.o
00:03:37.530 CC lib/nvme/nvme_poll_group.o
00:03:37.530 CC lib/nvme/nvme_zns.o
00:03:37.530 CC lib/nvme/nvme_stubs.o
00:03:37.530 CC lib/nvme/nvme_auth.o
00:03:37.530 CC lib/nvme/nvme_cuse.o
00:03:37.530 CC lib/nvme/nvme_vfio_user.o
00:03:37.530 CC lib/nvme/nvme_rdma.o
00:03:37.788 LIB libspdk_thread.a
00:03:37.788 SO libspdk_thread.so.11.0
00:03:37.788 SYMLINK libspdk_thread.so
00:03:38.046 CC lib/accel/accel.o
00:03:38.046 CC lib/accel/accel_rpc.o
00:03:38.046 CC lib/accel/accel_sw.o
00:03:38.046 CC lib/fsdev/fsdev.o
00:03:38.046 CC lib/fsdev/fsdev_io.o
00:03:38.046 CC lib/fsdev/fsdev_rpc.o
00:03:38.046 CC lib/vfu_tgt/tgt_endpoint.o
00:03:38.046 CC lib/vfu_tgt/tgt_rpc.o
00:03:38.046 CC lib/virtio/virtio_vhost_user.o
00:03:38.046 CC lib/virtio/virtio.o
00:03:38.046 CC lib/init/json_config.o
00:03:38.046 CC lib/init/subsystem.o
00:03:38.046 CC lib/virtio/virtio_vfio_user.o
00:03:38.046 CC lib/virtio/virtio_pci.o
00:03:38.047 CC lib/blob/request.o
00:03:38.047 CC lib/blob/blobstore.o
00:03:38.047 CC lib/init/subsystem_rpc.o
00:03:38.047 CC lib/init/rpc.o
00:03:38.047 CC lib/blob/zeroes.o
00:03:38.047 CC lib/blob/blob_bs_dev.o
00:03:38.305 LIB libspdk_init.a
00:03:38.305 SO libspdk_init.so.6.0
00:03:38.305 LIB libspdk_virtio.a
00:03:38.305 LIB libspdk_vfu_tgt.a
00:03:38.563 SYMLINK libspdk_init.so
00:03:38.563 SO libspdk_virtio.so.7.0
00:03:38.563 SO libspdk_vfu_tgt.so.3.0
00:03:38.563 SYMLINK libspdk_virtio.so
00:03:38.563 SYMLINK libspdk_vfu_tgt.so
00:03:38.563 LIB libspdk_fsdev.a
00:03:38.563 SO libspdk_fsdev.so.2.0
00:03:38.822 SYMLINK libspdk_fsdev.so
00:03:38.822 CC lib/event/app.o
00:03:38.822 CC lib/event/reactor.o
00:03:38.822 CC lib/event/log_rpc.o
00:03:38.822 CC lib/event/app_rpc.o
00:03:38.822 CC lib/event/scheduler_static.o
00:03:38.822 LIB libspdk_accel.a
00:03:39.081 SO libspdk_accel.so.16.0
00:03:39.081 SYMLINK libspdk_accel.so
00:03:39.081 CC lib/fuse_dispatcher/fuse_dispatcher.o
00:03:39.081 LIB libspdk_event.a
00:03:39.081 LIB libspdk_nvme.a
00:03:39.081 SO libspdk_event.so.14.0
00:03:39.340 SO libspdk_nvme.so.15.0
00:03:39.340 SYMLINK libspdk_event.so
00:03:39.340 CC lib/bdev/bdev.o
00:03:39.340 CC lib/bdev/bdev_rpc.o
00:03:39.340 CC lib/bdev/bdev_zone.o
00:03:39.340 CC lib/bdev/part.o
00:03:39.340 CC lib/bdev/scsi_nvme.o
00:03:39.340 SYMLINK libspdk_nvme.so
00:03:39.598 LIB libspdk_fuse_dispatcher.a
00:03:39.598 SO libspdk_fuse_dispatcher.so.1.0
00:03:39.598 SYMLINK libspdk_fuse_dispatcher.so
00:03:40.535 LIB libspdk_blob.a
00:03:40.535 SO libspdk_blob.so.12.0
00:03:40.535 SYMLINK libspdk_blob.so
00:03:40.794 CC lib/lvol/lvol.o
00:03:40.794 CC lib/blobfs/blobfs.o
00:03:40.794 CC lib/blobfs/tree.o
00:03:41.362 LIB libspdk_bdev.a
00:03:41.362 SO libspdk_bdev.so.17.0
00:03:41.362 LIB libspdk_blobfs.a
00:03:41.362 SYMLINK libspdk_bdev.so
00:03:41.362 SO libspdk_blobfs.so.11.0
00:03:41.362 LIB libspdk_lvol.a
00:03:41.362 SYMLINK libspdk_blobfs.so
00:03:41.362 SO libspdk_lvol.so.11.0
00:03:41.623 SYMLINK libspdk_lvol.so
00:03:41.623 CC lib/ublk/ublk.o
00:03:41.623 CC lib/ftl/ftl_core.o
00:03:41.623 CC lib/ublk/ublk_rpc.o
00:03:41.623 CC lib/ftl/ftl_init.o
00:03:41.623 CC lib/nvmf/ctrlr.o
00:03:41.623 CC lib/ftl/ftl_layout.o
00:03:41.623 CC lib/ftl/ftl_debug.o
00:03:41.623 CC lib/nvmf/ctrlr_discovery.o
00:03:41.623 CC lib/nvmf/ctrlr_bdev.o
00:03:41.623 CC lib/ftl/ftl_io.o
00:03:41.623 CC lib/scsi/dev.o
00:03:41.623 CC lib/nbd/nbd.o
00:03:41.623 CC lib/ftl/ftl_sb.o
00:03:41.623 CC lib/scsi/lun.o
00:03:41.623 CC lib/nvmf/subsystem.o
00:03:41.623 CC lib/ftl/ftl_l2p.o
00:03:41.623 CC lib/nvmf/nvmf.o
00:03:41.623 CC lib/nbd/nbd_rpc.o
00:03:41.623 CC lib/nvmf/nvmf_rpc.o
00:03:41.623 CC lib/ftl/ftl_l2p_flat.o
00:03:41.623 CC lib/scsi/port.o
00:03:41.623 CC lib/scsi/scsi.o
00:03:41.623 CC lib/nvmf/transport.o
00:03:41.623 CC lib/ftl/ftl_nv_cache.o
00:03:41.623 CC lib/scsi/scsi_bdev.o
00:03:41.623 CC lib/scsi/scsi_pr.o
00:03:41.623 CC lib/ftl/ftl_band.o
00:03:41.623 CC lib/nvmf/tcp.o
00:03:41.623 CC lib/nvmf/stubs.o
00:03:41.623 CC lib/ftl/ftl_band_ops.o
00:03:41.623 CC lib/nvmf/mdns_server.o
00:03:41.623 CC lib/scsi/scsi_rpc.o
00:03:41.623 CC lib/ftl/ftl_rq.o
00:03:41.623 CC lib/ftl/ftl_writer.o
00:03:41.623 CC lib/nvmf/vfio_user.o
00:03:41.623 CC lib/scsi/task.o
00:03:41.623 CC lib/nvmf/rdma.o
00:03:41.623 CC lib/ftl/ftl_reloc.o
00:03:41.623 CC lib/nvmf/auth.o
00:03:41.623 CC lib/ftl/ftl_l2p_cache.o
00:03:41.623 CC lib/ftl/ftl_p2l.o
00:03:41.623 CC lib/ftl/ftl_p2l_log.o
00:03:41.623 CC lib/ftl/mngt/ftl_mngt.o
00:03:41.882 CC lib/ftl/mngt/ftl_mngt_bdev.o
00:03:41.882 CC lib/ftl/mngt/ftl_mngt_shutdown.o
00:03:41.882 CC lib/ftl/mngt/ftl_mngt_startup.o
00:03:41.882 CC lib/ftl/mngt/ftl_mngt_md.o
00:03:41.882 CC lib/ftl/mngt/ftl_mngt_misc.o
00:03:41.882 CC lib/ftl/mngt/ftl_mngt_ioch.o
00:03:41.882 CC lib/ftl/mngt/ftl_mngt_l2p.o
00:03:41.882 CC lib/ftl/mngt/ftl_mngt_band.o
00:03:41.882 CC lib/ftl/mngt/ftl_mngt_self_test.o
00:03:41.882 CC lib/ftl/mngt/ftl_mngt_p2l.o
00:03:41.882 CC lib/ftl/mngt/ftl_mngt_recovery.o
00:03:41.882 CC lib/ftl/mngt/ftl_mngt_upgrade.o
00:03:41.882 CC lib/ftl/utils/ftl_conf.o
00:03:41.882 CC lib/ftl/utils/ftl_md.o
00:03:41.882 CC lib/ftl/utils/ftl_mempool.o
00:03:41.882 CC lib/ftl/utils/ftl_property.o
00:03:41.882 CC lib/ftl/utils/ftl_bitmap.o
00:03:41.882 CC lib/ftl/upgrade/ftl_layout_upgrade.o
00:03:41.882 CC lib/ftl/utils/ftl_layout_tracker_bdev.o
00:03:41.882 CC lib/ftl/upgrade/ftl_p2l_upgrade.o
00:03:41.882 CC lib/ftl/upgrade/ftl_sb_upgrade.o
00:03:41.882 CC lib/ftl/upgrade/ftl_band_upgrade.o
00:03:41.882 CC lib/ftl/upgrade/ftl_chunk_upgrade.o
00:03:41.882 CC lib/ftl/upgrade/ftl_sb_v3.o
00:03:41.882 CC lib/ftl/upgrade/ftl_trim_upgrade.o
00:03:41.882 CC lib/ftl/nvc/ftl_nvc_dev.o
00:03:41.882 CC lib/ftl/upgrade/ftl_sb_v5.o
00:03:41.882 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o
00:03:41.882 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o
00:03:41.882 CC lib/ftl/nvc/ftl_nvc_bdev_common.o
00:03:41.882 CC lib/ftl/base/ftl_base_dev.o
00:03:41.882 CC lib/ftl/base/ftl_base_bdev.o
00:03:41.882 CC lib/ftl/ftl_trace.o
00:03:42.449 LIB libspdk_scsi.a
00:03:42.449 LIB libspdk_ublk.a
00:03:42.449 SO libspdk_scsi.so.9.0
00:03:42.449 SO libspdk_ublk.so.3.0
00:03:42.449 LIB libspdk_nbd.a
00:03:42.449 SO libspdk_nbd.so.7.0
00:03:42.449 SYMLINK libspdk_ublk.so
00:03:42.449 SYMLINK libspdk_scsi.so
00:03:42.449 SYMLINK libspdk_nbd.so
00:03:42.708 LIB libspdk_ftl.a
00:03:42.966 CC lib/iscsi/conn.o
00:03:42.966 CC lib/vhost/vhost.o
00:03:42.966 CC lib/iscsi/init_grp.o
00:03:42.966 CC lib/vhost/vhost_rpc.o
00:03:42.966 CC lib/vhost/vhost_scsi.o
00:03:42.966 CC lib/iscsi/iscsi.o
00:03:42.966 CC lib/iscsi/param.o
00:03:42.966 CC lib/vhost/vhost_blk.o
00:03:42.966 CC lib/iscsi/portal_grp.o
00:03:42.966 CC lib/vhost/rte_vhost_user.o
00:03:42.966 CC lib/iscsi/tgt_node.o
00:03:42.966 CC lib/iscsi/iscsi_subsystem.o
00:03:42.966 CC lib/iscsi/iscsi_rpc.o
00:03:42.966 CC lib/iscsi/task.o
00:03:42.966 SO libspdk_ftl.so.9.0
00:03:43.225 SYMLINK libspdk_ftl.so
00:03:43.484 LIB libspdk_nvmf.a
00:03:43.484 SO libspdk_nvmf.so.20.0
00:03:43.743 SYMLINK libspdk_nvmf.so
00:03:43.743 LIB libspdk_vhost.a
00:03:43.743 SO libspdk_vhost.so.8.0
00:03:43.743 SYMLINK libspdk_vhost.so
00:03:43.743 LIB libspdk_iscsi.a
00:03:44.002 SO libspdk_iscsi.so.8.0
00:03:44.002 SYMLINK libspdk_iscsi.so
00:03:44.571 CC module/vfu_device/vfu_virtio.o
00:03:44.571 CC module/env_dpdk/env_dpdk_rpc.o
00:03:44.571 CC module/vfu_device/vfu_virtio_blk.o
00:03:44.571 CC module/vfu_device/vfu_virtio_scsi.o
00:03:44.571 CC module/vfu_device/vfu_virtio_rpc.o
00:03:44.571 CC module/vfu_device/vfu_virtio_fs.o
00:03:44.829 CC module/accel/error/accel_error.o
00:03:44.829 CC module/keyring/file/keyring.o
00:03:44.829 CC module/accel/error/accel_error_rpc.o
00:03:44.829 CC module/keyring/file/keyring_rpc.o
00:03:44.829 LIB libspdk_env_dpdk_rpc.a
00:03:44.829 CC module/sock/posix/posix.o
00:03:44.829 CC module/fsdev/aio/fsdev_aio.o
00:03:44.829 CC module/accel/iaa/accel_iaa.o
00:03:44.829 CC module/fsdev/aio/fsdev_aio_rpc.o
00:03:44.829 CC module/fsdev/aio/linux_aio_mgr.o
00:03:44.829 CC module/accel/iaa/accel_iaa_rpc.o
00:03:44.829 CC module/blob/bdev/blob_bdev.o
00:03:44.829 CC module/scheduler/dpdk_governor/dpdk_governor.o
00:03:44.829 CC module/accel/ioat/accel_ioat.o
00:03:44.829 CC module/accel/ioat/accel_ioat_rpc.o
00:03:44.829 CC module/scheduler/dynamic/scheduler_dynamic.o
00:03:44.829 CC module/keyring/linux/keyring.o
00:03:44.829 CC module/accel/dsa/accel_dsa.o
00:03:44.829 CC module/keyring/linux/keyring_rpc.o
00:03:44.829 CC module/scheduler/gscheduler/gscheduler.o
00:03:44.829 CC module/accel/dsa/accel_dsa_rpc.o
00:03:44.829 SO libspdk_env_dpdk_rpc.so.6.0
00:03:44.829 SYMLINK libspdk_env_dpdk_rpc.so
00:03:44.829 LIB libspdk_keyring_linux.a
00:03:44.829 LIB libspdk_keyring_file.a
00:03:44.829 LIB libspdk_scheduler_dpdk_governor.a
00:03:44.829 LIB libspdk_accel_error.a
00:03:44.829 LIB libspdk_scheduler_gscheduler.a
00:03:44.829 SO libspdk_keyring_linux.so.1.0
00:03:44.829 SO libspdk_keyring_file.so.2.0
00:03:44.829 LIB libspdk_scheduler_dynamic.a
00:03:44.829 SO libspdk_scheduler_dpdk_governor.so.4.0
00:03:44.829 LIB libspdk_accel_ioat.a
00:03:45.088 LIB libspdk_accel_iaa.a
00:03:45.088 SO libspdk_scheduler_gscheduler.so.4.0
00:03:45.088 SO libspdk_accel_error.so.2.0
00:03:45.088 SO libspdk_scheduler_dynamic.so.4.0
00:03:45.088 SO libspdk_accel_ioat.so.6.0
00:03:45.088 SYMLINK libspdk_keyring_linux.so
00:03:45.088 SO libspdk_accel_iaa.so.3.0
00:03:45.088 SYMLINK libspdk_scheduler_dpdk_governor.so
00:03:45.088 SYMLINK libspdk_keyring_file.so
00:03:45.088 SYMLINK libspdk_scheduler_gscheduler.so
00:03:45.088 LIB libspdk_blob_bdev.a
00:03:45.088 LIB libspdk_accel_dsa.a
00:03:45.088 SYMLINK libspdk_accel_error.so
00:03:45.088 SYMLINK libspdk_scheduler_dynamic.so
00:03:45.088 SYMLINK libspdk_accel_ioat.so
00:03:45.088 SYMLINK libspdk_accel_iaa.so
00:03:45.088 SO libspdk_blob_bdev.so.12.0
00:03:45.088 SO libspdk_accel_dsa.so.5.0
00:03:45.088 SYMLINK libspdk_blob_bdev.so
00:03:45.088 LIB libspdk_vfu_device.a
00:03:45.088 SYMLINK libspdk_accel_dsa.so
00:03:45.088 SO libspdk_vfu_device.so.3.0
00:03:45.347 SYMLINK libspdk_vfu_device.so
00:03:45.347 LIB libspdk_fsdev_aio.a
00:03:45.347 SO libspdk_fsdev_aio.so.1.0
00:03:45.347 LIB libspdk_sock_posix.a
00:03:45.347 SO libspdk_sock_posix.so.6.0
00:03:45.347 SYMLINK libspdk_fsdev_aio.so
00:03:45.607 SYMLINK libspdk_sock_posix.so
00:03:45.607 CC module/bdev/gpt/gpt.o
00:03:45.607 CC module/bdev/gpt/vbdev_gpt.o
00:03:45.607 CC module/blobfs/bdev/blobfs_bdev.o
00:03:45.607 CC module/blobfs/bdev/blobfs_bdev_rpc.o
00:03:45.607 CC module/bdev/malloc/bdev_malloc.o
00:03:45.607 CC module/bdev/lvol/vbdev_lvol.o
00:03:45.607 CC module/bdev/malloc/bdev_malloc_rpc.o
00:03:45.607 CC module/bdev/delay/vbdev_delay.o
00:03:45.607 CC module/bdev/lvol/vbdev_lvol_rpc.o
00:03:45.607 CC module/bdev/delay/vbdev_delay_rpc.o
00:03:45.607 CC module/bdev/passthru/vbdev_passthru.o
00:03:45.607 CC module/bdev/passthru/vbdev_passthru_rpc.o
00:03:45.607 CC module/bdev/nvme/bdev_nvme.o
00:03:45.607 CC module/bdev/nvme/bdev_nvme_rpc.o
00:03:45.607 CC module/bdev/nvme/nvme_rpc.o
00:03:45.607 CC module/bdev/nvme/bdev_mdns_client.o
00:03:45.607 CC module/bdev/raid/bdev_raid.o
00:03:45.607 CC module/bdev/raid/bdev_raid_rpc.o
00:03:45.607 CC module/bdev/split/vbdev_split.o
00:03:45.607 CC module/bdev/raid/bdev_raid_sb.o
00:03:45.607 CC module/bdev/nvme/vbdev_opal.o
00:03:45.607 CC module/bdev/split/vbdev_split_rpc.o
00:03:45.607 CC module/bdev/raid/raid0.o
00:03:45.607 CC module/bdev/raid/raid1.o
00:03:45.607 CC module/bdev/nvme/vbdev_opal_rpc.o
00:03:45.607 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o
00:03:45.607 CC module/bdev/raid/concat.o
00:03:45.607 CC module/bdev/aio/bdev_aio.o
00:03:45.607 CC module/bdev/zone_block/vbdev_zone_block.o
00:03:45.607 CC module/bdev/aio/bdev_aio_rpc.o
00:03:45.607 CC module/bdev/zone_block/vbdev_zone_block_rpc.o
00:03:45.607 CC module/bdev/ftl/bdev_ftl.o
00:03:45.607 CC module/bdev/error/vbdev_error.o
00:03:45.607 CC module/bdev/error/vbdev_error_rpc.o
00:03:45.607 CC module/bdev/ftl/bdev_ftl_rpc.o
00:03:45.607 CC module/bdev/iscsi/bdev_iscsi.o
00:03:45.607 CC module/bdev/iscsi/bdev_iscsi_rpc.o
00:03:45.607 CC module/bdev/virtio/bdev_virtio_scsi.o
00:03:45.607 CC module/bdev/null/bdev_null.o
00:03:45.607 CC module/bdev/null/bdev_null_rpc.o
00:03:45.607 CC module/bdev/virtio/bdev_virtio_rpc.o
00:03:45.607 CC module/bdev/virtio/bdev_virtio_blk.o
00:03:45.866 LIB libspdk_blobfs_bdev.a
00:03:45.866 LIB libspdk_bdev_gpt.a
00:03:45.866 SO libspdk_blobfs_bdev.so.6.0
00:03:45.866 LIB libspdk_bdev_split.a
00:03:45.866 SO libspdk_bdev_gpt.so.6.0
00:03:45.866 SO libspdk_bdev_split.so.6.0
00:03:46.125 LIB libspdk_bdev_passthru.a
00:03:46.125 SYMLINK libspdk_blobfs_bdev.so
00:03:46.125 LIB libspdk_bdev_null.a
00:03:46.125 LIB libspdk_bdev_delay.a
00:03:46.125 LIB libspdk_bdev_error.a
00:03:46.125 SO libspdk_bdev_passthru.so.6.0
00:03:46.125 LIB libspdk_bdev_ftl.a
00:03:46.125 SYMLINK libspdk_bdev_gpt.so
00:03:46.125 SYMLINK libspdk_bdev_split.so
00:03:46.125 LIB libspdk_bdev_malloc.a
00:03:46.125 SO libspdk_bdev_null.so.6.0
00:03:46.125 SO libspdk_bdev_delay.so.6.0
00:03:46.125 SO libspdk_bdev_error.so.6.0
00:03:46.125 SO libspdk_bdev_ftl.so.6.0
00:03:46.125 SO libspdk_bdev_malloc.so.6.0
00:03:46.125 LIB libspdk_bdev_aio.a
00:03:46.125 LIB libspdk_bdev_zone_block.a
00:03:46.125 LIB libspdk_bdev_iscsi.a
00:03:46.125 SYMLINK libspdk_bdev_passthru.so
00:03:46.125 SO libspdk_bdev_aio.so.6.0
00:03:46.125 SO libspdk_bdev_zone_block.so.6.0
00:03:46.125 SYMLINK libspdk_bdev_null.so
00:03:46.125 SYMLINK libspdk_bdev_ftl.so
00:03:46.125 SYMLINK libspdk_bdev_delay.so
00:03:46.125 SYMLINK libspdk_bdev_malloc.so
00:03:46.125 SYMLINK libspdk_bdev_error.so
00:03:46.125 SO libspdk_bdev_iscsi.so.6.0
00:03:46.125 LIB libspdk_bdev_lvol.a
00:03:46.125 SYMLINK libspdk_bdev_aio.so
00:03:46.125 SYMLINK libspdk_bdev_zone_block.so
00:03:46.125 SO libspdk_bdev_lvol.so.6.0
00:03:46.125 SYMLINK libspdk_bdev_iscsi.so
00:03:46.125 LIB libspdk_bdev_virtio.a
00:03:46.384 SYMLINK libspdk_bdev_lvol.so
00:03:46.384 SO libspdk_bdev_virtio.so.6.0
00:03:46.384 SYMLINK libspdk_bdev_virtio.so
00:03:46.643 LIB libspdk_bdev_raid.a
00:03:46.643 SO libspdk_bdev_raid.so.6.0
00:03:46.643 SYMLINK libspdk_bdev_raid.so
00:03:47.580 LIB libspdk_bdev_nvme.a
00:03:47.580 SO libspdk_bdev_nvme.so.7.1
00:03:47.580 SYMLINK libspdk_bdev_nvme.so
00:03:48.518 CC module/event/subsystems/iobuf/iobuf.o
00:03:48.518 CC module/event/subsystems/iobuf/iobuf_rpc.o
00:03:48.518 CC module/event/subsystems/scheduler/scheduler.o
00:03:48.518 CC module/event/subsystems/sock/sock.o
00:03:48.518 CC module/event/subsystems/vmd/vmd.o
00:03:48.518 CC module/event/subsystems/vmd/vmd_rpc.o
00:03:48.518 CC module/event/subsystems/vfu_tgt/vfu_tgt.o
00:03:48.518 CC module/event/subsystems/keyring/keyring.o
00:03:48.518 CC module/event/subsystems/fsdev/fsdev.o
00:03:48.518 CC module/event/subsystems/vhost_blk/vhost_blk.o
00:03:48.518 LIB libspdk_event_fsdev.a
00:03:48.518 LIB libspdk_event_vfu_tgt.a
00:03:48.518 LIB libspdk_event_scheduler.a
00:03:48.518 LIB libspdk_event_keyring.a
00:03:48.518 LIB libspdk_event_iobuf.a
00:03:48.518 LIB libspdk_event_sock.a
00:03:48.518 LIB libspdk_event_vmd.a
00:03:48.518 LIB libspdk_event_vhost_blk.a
00:03:48.518 SO libspdk_event_vfu_tgt.so.3.0
00:03:48.518 SO libspdk_event_fsdev.so.1.0
00:03:48.518 SO libspdk_event_scheduler.so.4.0
00:03:48.518 SO libspdk_event_keyring.so.1.0
00:03:48.518 SO libspdk_event_sock.so.5.0
00:03:48.518 SO libspdk_event_iobuf.so.3.0
00:03:48.518 SO libspdk_event_vmd.so.6.0
00:03:48.518 SO libspdk_event_vhost_blk.so.3.0
00:03:48.518 SYMLINK libspdk_event_vfu_tgt.so
00:03:48.778 SYMLINK libspdk_event_fsdev.so
00:03:48.778 SYMLINK libspdk_event_iobuf.so
00:03:48.778 SYMLINK libspdk_event_sock.so
00:03:48.778 SYMLINK libspdk_event_keyring.so
00:03:48.778 SYMLINK libspdk_event_scheduler.so
00:03:48.778 SYMLINK libspdk_event_vmd.so
00:03:48.778 SYMLINK libspdk_event_vhost_blk.so
00:03:48.778 CC module/event/subsystems/accel/accel.o
00:03:49.037 LIB libspdk_event_accel.a
00:03:49.037 SO libspdk_event_accel.so.6.0
00:03:49.296 SYMLINK libspdk_event_accel.so
00:03:49.296 CC module/event/subsystems/bdev/bdev.o
00:03:49.555 LIB libspdk_event_bdev.a
00:03:49.814 SO libspdk_event_bdev.so.6.0
00:03:49.814 SYMLINK libspdk_event_bdev.so
00:03:49.814 CC module/event/subsystems/scsi/scsi.o
00:03:50.382 CC module/event/subsystems/ublk/ublk.o
00:03:50.382 CC module/event/subsystems/nbd/nbd.o
00:03:50.382 CC module/event/subsystems/nvmf/nvmf_rpc.o
00:03:50.382 CC module/event/subsystems/nvmf/nvmf_tgt.o
00:03:50.382 LIB libspdk_event_ublk.a
00:03:50.382 LIB libspdk_event_nbd.a
00:03:50.382 LIB libspdk_event_scsi.a
00:03:50.382 SO libspdk_event_ublk.so.3.0
00:03:50.382 SO libspdk_event_nbd.so.6.0
00:03:50.382 SO libspdk_event_scsi.so.6.0
00:03:50.382 LIB libspdk_event_nvmf.a
00:03:50.382 SYMLINK libspdk_event_ublk.so
00:03:50.382 SYMLINK libspdk_event_nbd.so
00:03:50.382 SYMLINK libspdk_event_scsi.so
00:03:50.382 SO libspdk_event_nvmf.so.6.0
00:03:50.642 SYMLINK libspdk_event_nvmf.so
00:03:50.901 CC module/event/subsystems/iscsi/iscsi.o
00:03:50.901 CC module/event/subsystems/vhost_scsi/vhost_scsi.o
00:03:50.901 LIB libspdk_event_vhost_scsi.a
00:03:50.901 LIB libspdk_event_iscsi.a
00:03:50.901 SO libspdk_event_vhost_scsi.so.3.0
00:03:50.901 SO libspdk_event_iscsi.so.6.0
00:03:51.160 SYMLINK libspdk_event_vhost_scsi.so
00:03:51.160 SYMLINK libspdk_event_iscsi.so
00:03:51.160 SO libspdk.so.6.0
00:03:51.160 SYMLINK libspdk.so
00:03:51.738 CC app/trace_record/trace_record.o
00:03:51.738 CXX app/trace/trace.o
00:03:51.738 CC app/spdk_nvme_discover/discovery_aer.o
00:03:51.738 CC test/rpc_client/rpc_client_test.o
00:03:51.738 CC app/spdk_top/spdk_top.o
00:03:51.738 TEST_HEADER include/spdk/accel.h
00:03:51.738 TEST_HEADER include/spdk/accel_module.h
00:03:51.738 TEST_HEADER include/spdk/barrier.h
00:03:51.738 TEST_HEADER include/spdk/assert.h
00:03:51.738 TEST_HEADER include/spdk/base64.h
00:03:51.738 TEST_HEADER include/spdk/bdev_module.h
00:03:51.738 TEST_HEADER include/spdk/bdev.h
00:03:51.738 TEST_HEADER include/spdk/bdev_zone.h
00:03:51.738 TEST_HEADER include/spdk/bit_array.h
00:03:51.738 CC app/spdk_nvme_identify/identify.o
00:03:51.738 TEST_HEADER include/spdk/blobfs_bdev.h
00:03:51.738 TEST_HEADER include/spdk/blob_bdev.h
00:03:51.738 TEST_HEADER include/spdk/blobfs.h
00:03:51.738 CC app/spdk_lspci/spdk_lspci.o
00:03:51.738 TEST_HEADER include/spdk/bit_pool.h
00:03:51.738 CC app/spdk_nvme_perf/perf.o
00:03:51.738 TEST_HEADER include/spdk/conf.h
00:03:51.738 TEST_HEADER include/spdk/blob.h
00:03:51.738 TEST_HEADER include/spdk/config.h
00:03:51.738 TEST_HEADER include/spdk/cpuset.h
00:03:51.738 TEST_HEADER include/spdk/crc16.h
00:03:51.738 TEST_HEADER include/spdk/crc64.h
00:03:51.738 TEST_HEADER include/spdk/crc32.h
00:03:51.738 TEST_HEADER include/spdk/dma.h
00:03:51.738 TEST_HEADER include/spdk/endian.h
00:03:51.738 TEST_HEADER include/spdk/dif.h
00:03:51.738 TEST_HEADER include/spdk/env_dpdk.h
00:03:51.738 TEST_HEADER include/spdk/env.h
00:03:51.738 TEST_HEADER include/spdk/fd_group.h
00:03:51.738 TEST_HEADER include/spdk/event.h
00:03:51.738 TEST_HEADER include/spdk/fd.h
00:03:51.738 TEST_HEADER include/spdk/fsdev.h
00:03:51.738 TEST_HEADER include/spdk/file.h
00:03:51.738 TEST_HEADER include/spdk/ftl.h
00:03:51.738 TEST_HEADER include/spdk/fsdev_module.h
00:03:51.738 TEST_HEADER include/spdk/gpt_spec.h
00:03:51.738 TEST_HEADER include/spdk/hexlify.h
00:03:51.738 TEST_HEADER include/spdk/histogram_data.h
00:03:51.738 TEST_HEADER include/spdk/idxd.h
00:03:51.738 TEST_HEADER include/spdk/init.h
00:03:51.738 TEST_HEADER include/spdk/idxd_spec.h
00:03:51.738 TEST_HEADER include/spdk/ioat_spec.h
00:03:51.738 TEST_HEADER include/spdk/iscsi_spec.h
00:03:51.738 TEST_HEADER include/spdk/ioat.h
00:03:51.738 TEST_HEADER include/spdk/jsonrpc.h
00:03:51.738 TEST_HEADER include/spdk/json.h
00:03:51.738 TEST_HEADER include/spdk/keyring_module.h
00:03:51.738 TEST_HEADER include/spdk/keyring.h
00:03:51.738 TEST_HEADER include/spdk/likely.h
00:03:51.738 TEST_HEADER include/spdk/log.h
00:03:51.738 TEST_HEADER include/spdk/lvol.h
00:03:51.738 TEST_HEADER include/spdk/md5.h
00:03:51.738 TEST_HEADER include/spdk/memory.h
00:03:51.738 TEST_HEADER include/spdk/mmio.h
00:03:51.738 CC app/spdk_dd/spdk_dd.o
00:03:51.738 TEST_HEADER include/spdk/net.h
00:03:51.738 TEST_HEADER include/spdk/nbd.h
00:03:51.738 TEST_HEADER include/spdk/notify.h
00:03:51.738 TEST_HEADER include/spdk/nvme.h
00:03:51.738 TEST_HEADER include/spdk/nvme_intel.h
00:03:51.738 TEST_HEADER include/spdk/nvme_ocssd_spec.h
00:03:51.738 TEST_HEADER include/spdk/nvme_ocssd.h
00:03:51.738 CC examples/interrupt_tgt/interrupt_tgt.o
00:03:51.738 TEST_HEADER include/spdk/nvme_spec.h
00:03:51.738 CC app/iscsi_tgt/iscsi_tgt.o
00:03:51.738 TEST_HEADER include/spdk/nvme_zns.h
00:03:51.738 TEST_HEADER include/spdk/nvmf_cmd.h
00:03:51.738 TEST_HEADER include/spdk/nvmf.h
00:03:51.738 TEST_HEADER include/spdk/nvmf_spec.h
00:03:51.738 TEST_HEADER include/spdk/nvmf_fc_spec.h
00:03:51.738 CC app/nvmf_tgt/nvmf_main.o
00:03:51.738 TEST_HEADER include/spdk/nvmf_transport.h
00:03:51.738 TEST_HEADER include/spdk/opal_spec.h
00:03:51.738 TEST_HEADER include/spdk/pci_ids.h
00:03:51.738 TEST_HEADER include/spdk/queue.h
00:03:51.738 TEST_HEADER include/spdk/opal.h
00:03:51.738 TEST_HEADER include/spdk/pipe.h
00:03:51.738 TEST_HEADER include/spdk/scheduler.h
00:03:51.738 TEST_HEADER include/spdk/rpc.h
00:03:51.738 TEST_HEADER include/spdk/scsi.h
00:03:51.738 TEST_HEADER include/spdk/scsi_spec.h
00:03:51.738 TEST_HEADER include/spdk/reduce.h
00:03:51.738 TEST_HEADER include/spdk/sock.h
00:03:51.738 TEST_HEADER include/spdk/string.h
00:03:51.738 TEST_HEADER include/spdk/thread.h
00:03:51.738 TEST_HEADER include/spdk/stdinc.h
00:03:51.738 TEST_HEADER include/spdk/trace.h
00:03:51.738 TEST_HEADER include/spdk/trace_parser.h
00:03:51.738 TEST_HEADER include/spdk/tree.h
00:03:51.738 TEST_HEADER include/spdk/ublk.h
00:03:51.738 TEST_HEADER include/spdk/util.h
00:03:51.738 TEST_HEADER include/spdk/uuid.h
00:03:51.738 TEST_HEADER include/spdk/vfio_user_pci.h
00:03:51.738 TEST_HEADER include/spdk/vfio_user_spec.h
00:03:51.738 TEST_HEADER include/spdk/vhost.h
00:03:51.738 TEST_HEADER include/spdk/version.h
00:03:51.738 TEST_HEADER include/spdk/vmd.h
00:03:51.738 TEST_HEADER include/spdk/zipf.h
00:03:51.738 TEST_HEADER include/spdk/xor.h
00:03:51.738 CXX test/cpp_headers/accel.o
00:03:51.738 CXX test/cpp_headers/accel_module.o
00:03:51.738 CXX test/cpp_headers/barrier.o
00:03:51.738 CXX test/cpp_headers/assert.o
00:03:51.738 CXX test/cpp_headers/base64.o
00:03:51.738 CXX test/cpp_headers/bdev.o
00:03:51.738 CXX test/cpp_headers/bdev_module.o
00:03:51.738 CXX test/cpp_headers/bit_array.o
00:03:51.738 CXX test/cpp_headers/bdev_zone.o
00:03:51.738 CXX test/cpp_headers/blobfs_bdev.o
00:03:51.738 CXX test/cpp_headers/bit_pool.o
00:03:51.738 CXX test/cpp_headers/blob_bdev.o
00:03:51.738 CXX test/cpp_headers/blob.o
00:03:51.738 CXX test/cpp_headers/blobfs.o
00:03:51.738 CXX test/cpp_headers/conf.o
00:03:51.738 CXX test/cpp_headers/config.o
00:03:51.738 CXX test/cpp_headers/crc16.o
00:03:51.738 CXX test/cpp_headers/cpuset.o
00:03:51.738 CC app/spdk_tgt/spdk_tgt.o
00:03:51.738 CXX test/cpp_headers/crc32.o
00:03:51.738 CXX test/cpp_headers/dif.o
00:03:51.738 CXX test/cpp_headers/crc64.o
00:03:51.738 CXX test/cpp_headers/dma.o
00:03:51.738 CXX test/cpp_headers/endian.o
00:03:51.738 CXX test/cpp_headers/env.o
00:03:51.738 CXX test/cpp_headers/event.o
00:03:51.738 CXX test/cpp_headers/env_dpdk.o
00:03:51.738 CXX test/cpp_headers/fd_group.o
00:03:51.738 CXX test/cpp_headers/file.o
00:03:51.738 CXX test/cpp_headers/fd.o
00:03:51.738 CXX test/cpp_headers/fsdev_module.o
00:03:51.738 CXX test/cpp_headers/fsdev.o
00:03:51.738 CXX test/cpp_headers/gpt_spec.o
00:03:51.738 CXX test/cpp_headers/histogram_data.o
00:03:51.738 CXX test/cpp_headers/ftl.o
00:03:51.738 CXX test/cpp_headers/hexlify.o
00:03:51.738 CXX test/cpp_headers/idxd.o
00:03:51.738 CXX test/cpp_headers/idxd_spec.o
00:03:51.738 CXX test/cpp_headers/init.o
00:03:51.738 CXX test/cpp_headers/ioat.o
00:03:51.738 CXX test/cpp_headers/ioat_spec.o
00:03:51.738 CXX test/cpp_headers/iscsi_spec.o
00:03:51.738 CXX test/cpp_headers/json.o
00:03:51.738 CXX test/cpp_headers/jsonrpc.o
00:03:51.738 CXX test/cpp_headers/likely.o
00:03:51.738 CXX test/cpp_headers/keyring.o
00:03:51.738 CXX test/cpp_headers/lvol.o
00:03:51.738 CXX test/cpp_headers/md5.o
00:03:51.738 CXX test/cpp_headers/keyring_module.o
00:03:51.738 CXX test/cpp_headers/mmio.o
00:03:51.738 CXX test/cpp_headers/memory.o
00:03:51.738 CXX test/cpp_headers/nbd.o
00:03:51.738 CXX test/cpp_headers/log.o
00:03:51.738 CXX test/cpp_headers/notify.o
00:03:51.738 CXX test/cpp_headers/nvme_intel.o
00:03:51.738 CXX test/cpp_headers/net.o
00:03:51.738 CXX test/cpp_headers/nvme_ocssd_spec.o
00:03:51.738 CXX test/cpp_headers/nvme.o
00:03:51.738 CXX test/cpp_headers/nvme_spec.o
00:03:51.738 CXX test/cpp_headers/nvme_zns.o
00:03:51.738 CXX test/cpp_headers/nvme_ocssd.o
00:03:51.738 CXX test/cpp_headers/nvmf_cmd.o
00:03:51.738 CXX test/cpp_headers/nvmf.o
00:03:51.738 CXX test/cpp_headers/nvmf_fc_spec.o
00:03:51.738 CXX test/cpp_headers/nvmf_spec.o
00:03:51.738 CXX test/cpp_headers/nvmf_transport.o
00:03:51.738 CXX test/cpp_headers/opal.o
00:03:51.738 CXX test/cpp_headers/opal_spec.o
00:03:51.738 CXX test/cpp_headers/pci_ids.o
00:03:52.022 CC test/env/pci/pci_ut.o
00:03:52.022 CC examples/util/zipf/zipf.o
00:03:52.022 CC test/thread/poller_perf/poller_perf.o
00:03:52.022 CC examples/ioat/perf/perf.o
00:03:52.022 CC test/env/vtophys/vtophys.o
00:03:52.022 CC test/app/jsoncat/jsoncat.o
00:03:52.022 CC test/env/memory/memory_ut.o
00:03:52.022 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o
00:03:52.022 CC test/app/histogram_perf/histogram_perf.o
00:03:52.022 CC test/app/bdev_svc/bdev_svc.o
00:03:52.022 CC test/app/stub/stub.o
00:03:52.022 CC examples/ioat/verify/verify.o
00:03:52.022 CC test/dma/test_dma/test_dma.o
00:03:52.022 CC app/fio/nvme/fio_plugin.o
00:03:52.022 CC app/fio/bdev/fio_plugin.o
00:03:52.290 LINK rpc_client_test
00:03:52.290 LINK spdk_nvme_discover
00:03:52.290 LINK nvmf_tgt
00:03:52.290 LINK spdk_lspci
00:03:52.290 LINK spdk_trace_record
00:03:52.290 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o
00:03:52.290 LINK iscsi_tgt
00:03:52.290 CC test/env/mem_callbacks/mem_callbacks.o
00:03:52.290 CXX test/cpp_headers/pipe.o
00:03:52.290 CXX test/cpp_headers/queue.o
00:03:52.290 LINK poller_perf
00:03:52.290 LINK vtophys
00:03:52.290 CXX test/cpp_headers/reduce.o
00:03:52.290 CXX test/cpp_headers/rpc.o
00:03:52.290 CXX test/cpp_headers/scheduler.o
00:03:52.290 CXX test/cpp_headers/scsi.o
00:03:52.290 CXX test/cpp_headers/scsi_spec.o
00:03:52.290 CXX test/cpp_headers/sock.o
00:03:52.290 CXX test/cpp_headers/stdinc.o
00:03:52.290 LINK spdk_tgt
00:03:52.551 CXX test/cpp_headers/string.o
00:03:52.551 CXX test/cpp_headers/thread.o
00:03:52.551 CXX test/cpp_headers/trace.o
00:03:52.551 CXX test/cpp_headers/trace_parser.o
00:03:52.551 CXX test/cpp_headers/tree.o
00:03:52.551 CXX test/cpp_headers/ublk.o
00:03:52.551 CXX test/cpp_headers/util.o
00:03:52.551 CXX test/cpp_headers/uuid.o
00:03:52.551 CXX test/cpp_headers/version.o
00:03:52.551 CXX test/cpp_headers/vfio_user_pci.o
00:03:52.551 CXX test/cpp_headers/vfio_user_spec.o
00:03:52.551 CXX test/cpp_headers/vhost.o
00:03:52.551 CXX test/cpp_headers/vmd.o
00:03:52.551 LINK interrupt_tgt
00:03:52.551 CXX test/cpp_headers/xor.o
00:03:52.551 LINK env_dpdk_post_init
00:03:52.551 CXX test/cpp_headers/zipf.o
00:03:52.551 LINK bdev_svc
00:03:52.551 LINK ioat_perf
00:03:52.551 LINK jsoncat
00:03:52.551 LINK zipf
00:03:52.551 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o
00:03:52.551 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o
00:03:52.551 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o
00:03:52.551 LINK histogram_perf
00:03:52.551 LINK stub
00:03:52.551 LINK spdk_trace
00:03:52.810 LINK pci_ut
00:03:52.810 LINK verify
00:03:52.810 LINK spdk_dd
00:03:53.069 CC test/event/event_perf/event_perf.o
00:03:53.069 CC test/event/reactor_perf/reactor_perf.o
00:03:53.069 LINK spdk_nvme
00:03:53.069 CC test/event/reactor/reactor.o
00:03:53.069 CC test/event/app_repeat/app_repeat.o
00:03:53.069 LINK nvme_fuzz
00:03:53.069 CC test/event/scheduler/scheduler.o
00:03:53.069 LINK spdk_bdev
00:03:53.069 LINK test_dma
00:03:53.069 LINK spdk_top
00:03:53.069 LINK vhost_fuzz
00:03:53.069 CC examples/vmd/led/led.o
00:03:53.069 LINK mem_callbacks
00:03:53.069 CC examples/vmd/lsvmd/lsvmd.o
00:03:53.069 CC examples/sock/hello_world/hello_sock.o
00:03:53.069 CC app/vhost/vhost.o
00:03:53.069 CC examples/idxd/perf/perf.o
00:03:53.069 LINK spdk_nvme_perf
00:03:53.069 LINK reactor
00:03:53.069 CC examples/thread/thread/thread_ex.o
00:03:53.069 LINK event_perf
00:03:53.069 LINK reactor_perf
00:03:53.069 LINK spdk_nvme_identify
00:03:53.069 LINK app_repeat
00:03:53.328 LINK led
00:03:53.328 LINK lsvmd
00:03:53.328 LINK scheduler
00:03:53.328 LINK vhost
00:03:53.328 LINK hello_sock
00:03:53.328 LINK thread
00:03:53.328 LINK idxd_perf
00:03:53.587 LINK memory_ut
00:03:53.587 CC test/nvme/err_injection/err_injection.o
00:03:53.587 CC test/nvme/sgl/sgl.o
00:03:53.587 CC test/nvme/overhead/overhead.o
00:03:53.587 CC test/nvme/reset/reset.o
00:03:53.587 CC test/nvme/connect_stress/connect_stress.o
00:03:53.587 CC test/nvme/aer/aer.o
00:03:53.587 CC test/nvme/startup/startup.o
00:03:53.587 CC test/nvme/fdp/fdp.o
00:03:53.587 CC test/nvme/simple_copy/simple_copy.o
00:03:53.587 CC test/nvme/compliance/nvme_compliance.o
00:03:53.587 CC test/nvme/fused_ordering/fused_ordering.o
00:03:53.587 CC test/nvme/doorbell_aers/doorbell_aers.o
00:03:53.587 CC test/nvme/e2edp/nvme_dp.o
00:03:53.587 CC test/nvme/cuse/cuse.o
00:03:53.587 CC test/nvme/reserve/reserve.o
00:03:53.587 CC test/nvme/boot_partition/boot_partition.o
00:03:53.587 CC test/accel/dif/dif.o
00:03:53.587 CC test/blobfs/mkfs/mkfs.o
00:03:53.846 CC test/lvol/esnap/esnap.o
00:03:53.846 CC examples/nvme/nvme_manage/nvme_manage.o
00:03:53.846 CC examples/nvme/cmb_copy/cmb_copy.o
00:03:53.846 CC examples/nvme/arbitration/arbitration.o
00:03:53.846 CC examples/nvme/hotplug/hotplug.o
00:03:53.846 CC examples/nvme/hello_world/hello_world.o
00:03:53.846 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:03:53.846 CC examples/nvme/abort/abort.o
00:03:53.846 CC examples/nvme/reconnect/reconnect.o
00:03:53.846 LINK connect_stress
00:03:53.846 LINK err_injection
00:03:53.846 LINK doorbell_aers
00:03:53.846 LINK startup
00:03:53.846 LINK boot_partition
00:03:53.846 LINK reserve
00:03:53.846 LINK fused_ordering
00:03:53.846 LINK simple_copy
00:03:53.846 LINK reset
00:03:53.846 LINK mkfs
00:03:53.846 LINK sgl
00:03:53.846 LINK overhead
00:03:53.846 CC examples/accel/perf/accel_perf.o
00:03:53.846 LINK aer
00:03:53.846 LINK nvme_dp
00:03:53.846 LINK fdp
00:03:53.846 LINK nvme_compliance
00:03:53.846 CC examples/fsdev/hello_world/hello_fsdev.o
00:03:53.846 CC examples/blob/cli/blobcli.o
00:03:54.104 CC examples/blob/hello_world/hello_blob.o
00:03:54.104 LINK cmb_copy
00:03:54.104 LINK pmr_persistence
00:03:54.104 LINK iscsi_fuzz
00:03:54.104 LINK hotplug
00:03:54.104 LINK hello_world
00:03:54.104 LINK arbitration
00:03:54.104 LINK reconnect
00:03:54.104 LINK abort
00:03:54.363 LINK nvme_manage
00:03:54.363 LINK hello_blob
00:03:54.363 LINK hello_fsdev
00:03:54.363 LINK dif
00:03:54.363 LINK accel_perf
00:03:54.363 LINK blobcli
00:03:54.625 LINK cuse 00:03:54.901 CC test/bdev/bdevio/bdevio.o 00:03:54.901 CC examples/bdev/hello_world/hello_bdev.o 00:03:54.901 CC examples/bdev/bdevperf/bdevperf.o 00:03:55.160 LINK hello_bdev 00:03:55.160 LINK bdevio 00:03:55.418 LINK bdevperf 00:03:55.986 CC examples/nvmf/nvmf/nvmf.o 00:03:56.245 LINK nvmf 00:03:57.623 LINK esnap 00:03:57.623 00:03:57.623 real 0m55.476s 00:03:57.623 user 6m50.264s 00:03:57.623 sys 2m58.090s 00:03:57.623 02:25:28 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:57.623 02:25:28 make -- common/autotest_common.sh@10 -- $ set +x 00:03:57.623 ************************************ 00:03:57.623 END TEST make 00:03:57.623 ************************************ 00:03:57.623 02:25:28 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:57.623 02:25:28 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:57.623 02:25:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:57.623 02:25:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.623 02:25:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:57.623 02:25:28 -- pm/common@44 -- $ pid=675658 00:03:57.623 02:25:28 -- pm/common@50 -- $ kill -TERM 675658 00:03:57.623 02:25:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.623 02:25:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:57.623 02:25:28 -- pm/common@44 -- $ pid=675659 00:03:57.623 02:25:28 -- pm/common@50 -- $ kill -TERM 675659 00:03:57.623 02:25:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.623 02:25:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:57.623 02:25:28 -- pm/common@44 -- $ pid=675661 00:03:57.623 02:25:28 -- pm/common@50 -- $ kill -TERM 675661 00:03:57.623 02:25:28 -- pm/common@42 -- $ for 
monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.623 02:25:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:57.623 02:25:28 -- pm/common@44 -- $ pid=675687 00:03:57.623 02:25:28 -- pm/common@50 -- $ sudo -E kill -TERM 675687 00:03:57.623 02:25:28 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:57.623 02:25:28 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:57.883 02:25:28 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:57.883 02:25:28 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:57.883 02:25:28 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:57.883 02:25:28 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:57.883 02:25:28 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:57.883 02:25:28 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:57.883 02:25:28 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:57.883 02:25:28 -- scripts/common.sh@336 -- # IFS=.-: 00:03:57.883 02:25:28 -- scripts/common.sh@336 -- # read -ra ver1 00:03:57.883 02:25:28 -- scripts/common.sh@337 -- # IFS=.-: 00:03:57.883 02:25:28 -- scripts/common.sh@337 -- # read -ra ver2 00:03:57.883 02:25:28 -- scripts/common.sh@338 -- # local 'op=<' 00:03:57.883 02:25:28 -- scripts/common.sh@340 -- # ver1_l=2 00:03:57.883 02:25:28 -- scripts/common.sh@341 -- # ver2_l=1 00:03:57.883 02:25:28 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:57.883 02:25:28 -- scripts/common.sh@344 -- # case "$op" in 00:03:57.883 02:25:28 -- scripts/common.sh@345 -- # : 1 00:03:57.883 02:25:28 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:57.883 02:25:28 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
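The `stop_monitor_resources` trace above loops over the `MONITOR_RESOURCES` pid files under the power output directory and forwards TERM to each recorded collector. A minimal sketch of that pattern, assuming a `POWER_DIR` variable and a `collect-*.pid` glob for illustration (the real paths and monitor names live in `pm/common`):

```shell
#!/usr/bin/env bash
# Sketch of a signal_monitor_resources-style cleanup: each resource monitor
# records its pid in a .pid file; cleanup reads the file back and signals
# the process. POWER_DIR and the collect-*.pid glob are assumptions here.
signal_monitors() {
    local sig=$1 pidfile pid
    for pidfile in "$POWER_DIR"/collect-*.pid; do
        [[ -e $pidfile ]] || continue          # glob may match nothing
        pid=$(< "$pidfile")                    # pid file holds one number
        kill -"$sig" "$pid" 2>/dev/null || true
    done
}
```

The `|| true` mirrors the log's tolerance for monitors that already exited before cleanup ran.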
ver1_l : ver2_l) )) 00:03:57.883 02:25:28 -- scripts/common.sh@365 -- # decimal 1 00:03:57.883 02:25:28 -- scripts/common.sh@353 -- # local d=1 00:03:57.883 02:25:28 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:57.883 02:25:28 -- scripts/common.sh@355 -- # echo 1 00:03:57.883 02:25:28 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:57.883 02:25:28 -- scripts/common.sh@366 -- # decimal 2 00:03:57.883 02:25:28 -- scripts/common.sh@353 -- # local d=2 00:03:57.883 02:25:28 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:57.883 02:25:28 -- scripts/common.sh@355 -- # echo 2 00:03:57.883 02:25:28 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:57.883 02:25:28 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:57.883 02:25:28 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:57.883 02:25:28 -- scripts/common.sh@368 -- # return 0 00:03:57.883 02:25:28 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:57.883 02:25:28 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:57.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.883 --rc genhtml_branch_coverage=1 00:03:57.883 --rc genhtml_function_coverage=1 00:03:57.883 --rc genhtml_legend=1 00:03:57.883 --rc geninfo_all_blocks=1 00:03:57.883 --rc geninfo_unexecuted_blocks=1 00:03:57.883 00:03:57.883 ' 00:03:57.883 02:25:28 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:57.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.883 --rc genhtml_branch_coverage=1 00:03:57.883 --rc genhtml_function_coverage=1 00:03:57.883 --rc genhtml_legend=1 00:03:57.883 --rc geninfo_all_blocks=1 00:03:57.883 --rc geninfo_unexecuted_blocks=1 00:03:57.883 00:03:57.883 ' 00:03:57.883 02:25:28 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:57.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.883 --rc genhtml_branch_coverage=1 00:03:57.883 --rc 
genhtml_function_coverage=1 00:03:57.883 --rc genhtml_legend=1 00:03:57.883 --rc geninfo_all_blocks=1 00:03:57.883 --rc geninfo_unexecuted_blocks=1 00:03:57.883 00:03:57.883 ' 00:03:57.883 02:25:28 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:57.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.884 --rc genhtml_branch_coverage=1 00:03:57.884 --rc genhtml_function_coverage=1 00:03:57.884 --rc genhtml_legend=1 00:03:57.884 --rc geninfo_all_blocks=1 00:03:57.884 --rc geninfo_unexecuted_blocks=1 00:03:57.884 00:03:57.884 ' 00:03:57.884 02:25:28 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:57.884 02:25:28 -- nvmf/common.sh@7 -- # uname -s 00:03:57.884 02:25:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:57.884 02:25:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:57.884 02:25:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:57.884 02:25:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:57.884 02:25:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:57.884 02:25:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:57.884 02:25:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:57.884 02:25:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:57.884 02:25:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:57.884 02:25:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:57.884 02:25:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:03:57.884 02:25:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:03:57.884 02:25:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:57.884 02:25:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:57.884 02:25:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:57.884 02:25:28 -- nvmf/common.sh@22 -- # 
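The xtrace above steps through `scripts/common.sh`'s `lt 1.15 2` / `cmp_versions` check, which splits each version string on `.-:` with `read -ra` and compares field by field to decide which lcov coverage flags apply. A condensed sketch of that comparison; the function name `ver_lt` is mine, not the script's, and numeric-only fields are assumed:

```shell
# Field-by-field "less than" version compare, in the spirit of the
# cmp_versions trace above: split on '.', '-' and ':', pad the shorter
# version with zeros, and compare numerically left to right.
ver_lt() {                      # usage: ver_lt 1.15 2 -> exit 0 if $1 < $2
    local -a v1 v2
    local i len
    IFS='.-:' read -ra v1 <<< "$1"
    IFS='.-:' read -ra v2 <<< "$2"
    (( ${#v1[@]} > ${#v2[@]} )) && len=${#v1[@]} || len=${#v2[@]}
    for (( i = 0; i < len; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                    # equal -> not less-than
}
```

With lcov reporting 1.15, `ver_lt 1.15 2` succeeds, which is why the branch-coverage `--rc` options are selected in this run.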
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:57.884 02:25:28 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:57.884 02:25:28 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:57.884 02:25:28 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:57.884 02:25:28 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:57.884 02:25:28 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:57.884 02:25:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.884 02:25:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.884 02:25:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.884 02:25:28 -- paths/export.sh@5 -- # export PATH 00:03:57.884 02:25:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.884 02:25:28 -- nvmf/common.sh@51 -- # : 0 00:03:57.884 02:25:28 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:57.884 02:25:28 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:03:57.884 02:25:28 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:57.884 02:25:28 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:57.884 02:25:28 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:57.884 02:25:28 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:57.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:57.884 02:25:28 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:57.884 02:25:28 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:57.884 02:25:28 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:57.884 02:25:28 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:57.884 02:25:28 -- spdk/autotest.sh@32 -- # uname -s 00:03:57.884 02:25:28 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:57.884 02:25:28 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:57.884 02:25:28 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:57.884 02:25:28 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:57.884 02:25:28 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:57.884 02:25:28 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:57.884 02:25:28 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:57.884 02:25:28 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:57.884 02:25:28 -- spdk/autotest.sh@48 -- # udevadm_pid=756314 00:03:57.884 02:25:28 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:57.884 02:25:28 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:57.884 02:25:28 -- pm/common@17 -- # local monitor 00:03:57.884 02:25:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.884 02:25:28 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:57.884 02:25:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.884 02:25:28 -- pm/common@21 -- # date +%s 00:03:57.884 02:25:28 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.884 02:25:28 -- pm/common@21 -- # date +%s 00:03:57.884 02:25:28 -- pm/common@25 -- # sleep 1 00:03:57.884 02:25:28 -- pm/common@21 -- # date +%s 00:03:57.884 02:25:28 -- pm/common@21 -- # date +%s 00:03:57.884 02:25:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734312328 00:03:57.884 02:25:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734312328 00:03:57.884 02:25:28 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734312328 00:03:57.884 02:25:28 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734312328 00:03:57.884 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734312328_collect-cpu-load.pm.log 00:03:57.884 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734312328_collect-vmstat.pm.log 00:03:57.884 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734312328_collect-cpu-temp.pm.log 00:03:57.884 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734312328_collect-bmc-pm.bmc.pm.log 00:03:58.823 
02:25:29 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:58.823 02:25:29 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:58.823 02:25:29 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:58.823 02:25:29 -- common/autotest_common.sh@10 -- # set +x 00:03:58.823 02:25:29 -- spdk/autotest.sh@59 -- # create_test_list 00:03:58.823 02:25:29 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:58.823 02:25:29 -- common/autotest_common.sh@10 -- # set +x 00:03:59.082 02:25:29 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:59.082 02:25:29 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:59.082 02:25:29 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:59.082 02:25:29 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:59.082 02:25:29 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:59.082 02:25:29 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:59.082 02:25:29 -- common/autotest_common.sh@1457 -- # uname 00:03:59.082 02:25:29 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:59.082 02:25:29 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:59.082 02:25:29 -- common/autotest_common.sh@1477 -- # uname 00:03:59.082 02:25:29 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:59.082 02:25:29 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:59.082 02:25:29 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:59.082 lcov: LCOV version 1.15 00:03:59.082 02:25:29 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:17.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:17.178 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:23.751 02:25:54 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:23.751 02:25:54 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:23.751 02:25:54 -- common/autotest_common.sh@10 -- # set +x 00:04:23.751 02:25:54 -- spdk/autotest.sh@78 -- # rm -f 00:04:23.751 02:25:54 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:26.288 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:04:26.288 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:26.288 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:26.288 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:26.547 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:26.547 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:26.547 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:26.547 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:26.547 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:26.547 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:26.547 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:26.547 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:26.547 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:26.547 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:26.547 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:26.547 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:26.806 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:26.806 02:25:57 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:26.806 02:25:57 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:26.806 02:25:57 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:26.806 02:25:57 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:26.806 02:25:57 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:26.806 02:25:57 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:26.806 02:25:57 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:26.806 02:25:57 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:04:26.806 02:25:57 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:26.807 02:25:57 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:26.807 02:25:57 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:26.807 02:25:57 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:26.807 02:25:57 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:26.807 02:25:57 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:26.807 02:25:57 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:26.807 02:25:57 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:26.807 02:25:57 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:26.807 02:25:57 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:26.807 02:25:57 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:26.807 No valid GPT data, bailing 00:04:26.807 02:25:57 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:26.807 02:25:57 -- scripts/common.sh@394 -- # pt= 00:04:26.807 02:25:57 -- scripts/common.sh@395 -- 
# return 1 00:04:26.807 02:25:57 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:26.807 1+0 records in 00:04:26.807 1+0 records out 00:04:26.807 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00167837 s, 625 MB/s 00:04:26.807 02:25:57 -- spdk/autotest.sh@105 -- # sync 00:04:26.807 02:25:57 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:26.807 02:25:57 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:26.807 02:25:57 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:32.086 02:26:02 -- spdk/autotest.sh@111 -- # uname -s 00:04:32.086 02:26:02 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:32.086 02:26:02 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:32.086 02:26:02 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:35.389 Hugepages 00:04:35.389 node hugesize free / total 00:04:35.389 node0 1048576kB 0 / 0 00:04:35.389 node0 2048kB 0 / 0 00:04:35.389 node1 1048576kB 0 / 0 00:04:35.389 node1 2048kB 0 / 0 00:04:35.389 00:04:35.389 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:35.389 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:35.389 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:35.389 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:35.389 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:35.389 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:35.389 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:35.389 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:35.389 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:35.389 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:35.389 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:35.389 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:35.389 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:35.389 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:35.389 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:35.389 I/OAT 0000:80:04.5 8086 
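Before the `dd if=/dev/zero of=/dev/nvme0n1` above, the trace runs the device through `spdk-gpt.py` and `blkid -s PTTYPE -o value` so a disk that still carries a partition table is never wiped. A simplified guard in the same spirit, with the device path as a parameter; this sketch checks only blkid, whereas the autotest additionally consults `scripts/spdk-gpt.py`:

```shell
# Sketch of the block_in_use guard seen above: zero the first MiB only
# when blkid reports no partition-table type for the target. Simplified;
# the real flow also runs scripts/spdk-gpt.py against the device first.
wipe_if_unpartitioned() {
    local dev=$1 pt
    pt=$(blkid -s PTTYPE -o value "$dev" 2>/dev/null)
    if [[ -z $pt ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1 2>/dev/null
    else
        echo "$dev: $pt partition table present, skipping wipe" >&2
        return 1
    fi
}
```

In the log, blkid prints nothing ("No valid GPT data, bailing" and `pt=` empty), so the `dd` branch runs and reports the 1 MiB copy.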
2021 1 ioatdma - - 00:04:35.389 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:35.389 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:35.389 02:26:05 -- spdk/autotest.sh@117 -- # uname -s 00:04:35.389 02:26:05 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:35.389 02:26:05 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:35.389 02:26:05 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:37.929 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:37.929 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:37.929 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:37.929 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:37.929 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:37.929 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:37.929 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:38.189 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:38.189 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:38.189 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:38.189 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:38.189 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:38.189 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:38.189 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:38.189 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:38.189 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:39.127 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:39.127 02:26:09 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:40.066 02:26:10 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:40.066 02:26:10 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:40.066 02:26:10 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:40.066 02:26:10 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:40.066 02:26:10 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:40.066 02:26:10 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:40.066 02:26:10 -- 
common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:40.066 02:26:10 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:40.066 02:26:10 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:40.066 02:26:10 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:40.066 02:26:10 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:40.066 02:26:10 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:43.358 Waiting for block devices as requested 00:04:43.358 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:43.358 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:43.358 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:43.358 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:43.358 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:43.358 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:43.358 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:43.617 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:43.617 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:43.617 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:43.617 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:43.876 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:43.876 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:43.876 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:44.135 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:44.135 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:44.135 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:44.394 02:26:14 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:44.394 02:26:14 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:44.394 02:26:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:44.394 02:26:14 -- 
common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:04:44.394 02:26:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:44.394 02:26:14 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:44.395 02:26:14 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:44.395 02:26:14 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:44.395 02:26:14 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:44.395 02:26:14 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:44.395 02:26:14 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:44.395 02:26:14 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:44.395 02:26:14 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:44.395 02:26:14 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:04:44.395 02:26:14 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:44.395 02:26:14 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:44.395 02:26:14 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:44.395 02:26:14 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:44.395 02:26:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:44.395 02:26:14 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:44.395 02:26:14 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:44.395 02:26:14 -- common/autotest_common.sh@1543 -- # continue 00:04:44.395 02:26:14 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:44.395 02:26:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:44.395 02:26:14 -- common/autotest_common.sh@10 -- # set +x 00:04:44.395 02:26:14 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:44.395 02:26:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:44.395 
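The trace above parses the `oacs` field out of `nvme id-ctrl /dev/nvme0` (`grep oacs | cut -d: -f2` yields `' 0xf'`) and derives `oacs_ns_manage=8`, i.e. it masks bit 3 of the Optional Admin Command Support field, which per the NVMe spec indicates Namespace Management support. A sketch of that extraction; the `id-ctrl` line is a canned sample matching this run's controller rather than live `nvme` output:

```shell
# Parse the OACS value the way the trace does and test bit 3
# (Namespace Management). The input line is a hard-coded sample that
# matches this controller's reported value (0xf); the live script pipes
# 'nvme id-ctrl /dev/nvme0' through grep/cut instead.
id_ctrl_line='oacs      : 0xf'
oacs=$(cut -d: -f2 <<< "$id_ctrl_line")   # -> ' 0xf', as in the trace
oacs_ns_manage=$(( oacs & 0x8 ))          # bit 3: Namespace Management
if (( oacs_ns_manage != 0 )); then
    echo "namespace management supported (oacs=$oacs)"
fi
```

The nonzero mask result is what lets the cleanup path continue past the `[[ 8 -ne 0 ]]` check in the log.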
02:26:14 -- common/autotest_common.sh@10 -- # set +x 00:04:44.395 02:26:14 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:47.684 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:47.684 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:47.684 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:47.684 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:47.684 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:47.684 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:47.684 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:47.684 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:47.684 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:47.684 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:47.684 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:47.684 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:47.684 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:47.684 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:47.684 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:47.684 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:48.253 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:48.253 02:26:18 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:48.253 02:26:18 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:48.253 02:26:18 -- common/autotest_common.sh@10 -- # set +x 00:04:48.253 02:26:18 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:48.253 02:26:18 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:48.253 02:26:18 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:48.253 02:26:18 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:48.253 02:26:18 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:48.253 02:26:18 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:48.253 02:26:18 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:48.253 02:26:18 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 
00:04:48.253 02:26:18 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:48.253 02:26:18 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:48.253 02:26:18 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:48.253 02:26:18 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:48.253 02:26:18 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:48.512 02:26:18 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:48.512 02:26:18 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:48.512 02:26:18 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:48.512 02:26:18 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:48.512 02:26:18 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:48.512 02:26:18 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:48.512 02:26:18 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:48.512 02:26:18 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:48.512 02:26:18 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:04:48.512 02:26:18 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:04:48.512 02:26:18 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=770305 00:04:48.512 02:26:18 -- common/autotest_common.sh@1585 -- # waitforlisten 770305 00:04:48.512 02:26:18 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:48.512 02:26:18 -- common/autotest_common.sh@835 -- # '[' -z 770305 ']' 00:04:48.512 02:26:18 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.512 02:26:18 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.512 02:26:18 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.512 02:26:18 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.512 02:26:18 -- common/autotest_common.sh@10 -- # set +x 00:04:48.512 [2024-12-16 02:26:19.005190] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:04:48.513 [2024-12-16 02:26:19.005236] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid770305 ] 00:04:48.513 [2024-12-16 02:26:19.061977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.513 [2024-12-16 02:26:19.085070] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.771 02:26:19 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.771 02:26:19 -- common/autotest_common.sh@868 -- # return 0 00:04:48.771 02:26:19 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:48.771 02:26:19 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:48.771 02:26:19 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:52.058 nvme0n1 00:04:52.058 02:26:22 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:52.058 [2024-12-16 02:26:22.465110] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:52.058 [2024-12-16 02:26:22.465138] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:52.058 request: 00:04:52.058 { 00:04:52.058 "nvme_ctrlr_name": "nvme0", 00:04:52.058 "password": "test", 00:04:52.058 "method": 
"bdev_nvme_opal_revert", 00:04:52.058 "req_id": 1 00:04:52.058 } 00:04:52.058 Got JSON-RPC error response 00:04:52.058 response: 00:04:52.058 { 00:04:52.058 "code": -32603, 00:04:52.058 "message": "Internal error" 00:04:52.058 } 00:04:52.058 02:26:22 -- common/autotest_common.sh@1591 -- # true 00:04:52.058 02:26:22 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:52.058 02:26:22 -- common/autotest_common.sh@1595 -- # killprocess 770305 00:04:52.058 02:26:22 -- common/autotest_common.sh@954 -- # '[' -z 770305 ']' 00:04:52.058 02:26:22 -- common/autotest_common.sh@958 -- # kill -0 770305 00:04:52.058 02:26:22 -- common/autotest_common.sh@959 -- # uname 00:04:52.058 02:26:22 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.058 02:26:22 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 770305 00:04:52.058 02:26:22 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.058 02:26:22 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.058 02:26:22 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 770305' 00:04:52.058 killing process with pid 770305 00:04:52.058 02:26:22 -- common/autotest_common.sh@973 -- # kill 770305 00:04:52.058 02:26:22 -- common/autotest_common.sh@978 -- # wait 770305 00:04:53.963 02:26:24 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:53.963 02:26:24 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:53.963 02:26:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:53.963 02:26:24 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:53.963 02:26:24 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:53.963 02:26:24 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:53.963 02:26:24 -- common/autotest_common.sh@10 -- # set +x 00:04:53.963 02:26:24 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:53.963 02:26:24 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:53.963 02:26:24 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.963 02:26:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.963 02:26:24 -- common/autotest_common.sh@10 -- # set +x 00:04:53.963 ************************************ 00:04:53.963 START TEST env 00:04:53.963 ************************************ 00:04:53.963 02:26:24 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:53.963 * Looking for test storage... 00:04:53.963 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:53.963 02:26:24 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:53.963 02:26:24 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:53.963 02:26:24 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:53.963 02:26:24 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:53.963 02:26:24 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.963 02:26:24 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.963 02:26:24 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.963 02:26:24 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.963 02:26:24 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.963 02:26:24 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.963 02:26:24 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.963 02:26:24 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.963 02:26:24 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.963 02:26:24 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.963 02:26:24 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.963 02:26:24 env -- scripts/common.sh@344 -- # case "$op" in 00:04:53.963 02:26:24 env -- scripts/common.sh@345 -- # : 1 00:04:53.963 02:26:24 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.963 02:26:24 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.963 02:26:24 env -- scripts/common.sh@365 -- # decimal 1 00:04:53.963 02:26:24 env -- scripts/common.sh@353 -- # local d=1 00:04:53.963 02:26:24 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.963 02:26:24 env -- scripts/common.sh@355 -- # echo 1 00:04:53.963 02:26:24 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.963 02:26:24 env -- scripts/common.sh@366 -- # decimal 2 00:04:53.963 02:26:24 env -- scripts/common.sh@353 -- # local d=2 00:04:53.963 02:26:24 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.963 02:26:24 env -- scripts/common.sh@355 -- # echo 2 00:04:53.963 02:26:24 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.963 02:26:24 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.963 02:26:24 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.963 02:26:24 env -- scripts/common.sh@368 -- # return 0 00:04:53.963 02:26:24 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.963 02:26:24 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:53.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.963 --rc genhtml_branch_coverage=1 00:04:53.963 --rc genhtml_function_coverage=1 00:04:53.963 --rc genhtml_legend=1 00:04:53.963 --rc geninfo_all_blocks=1 00:04:53.963 --rc geninfo_unexecuted_blocks=1 00:04:53.963 00:04:53.963 ' 00:04:53.963 02:26:24 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:53.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.963 --rc genhtml_branch_coverage=1 00:04:53.963 --rc genhtml_function_coverage=1 00:04:53.963 --rc genhtml_legend=1 00:04:53.963 --rc geninfo_all_blocks=1 00:04:53.963 --rc geninfo_unexecuted_blocks=1 00:04:53.963 00:04:53.963 ' 00:04:53.963 02:26:24 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:53.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:53.963 --rc genhtml_branch_coverage=1 00:04:53.963 --rc genhtml_function_coverage=1 00:04:53.963 --rc genhtml_legend=1 00:04:53.963 --rc geninfo_all_blocks=1 00:04:53.963 --rc geninfo_unexecuted_blocks=1 00:04:53.963 00:04:53.963 ' 00:04:53.963 02:26:24 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:53.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.963 --rc genhtml_branch_coverage=1 00:04:53.963 --rc genhtml_function_coverage=1 00:04:53.963 --rc genhtml_legend=1 00:04:53.963 --rc geninfo_all_blocks=1 00:04:53.963 --rc geninfo_unexecuted_blocks=1 00:04:53.963 00:04:53.963 ' 00:04:53.963 02:26:24 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:53.963 02:26:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.963 02:26:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.963 02:26:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:53.963 ************************************ 00:04:53.963 START TEST env_memory 00:04:53.963 ************************************ 00:04:53.963 02:26:24 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:53.963 00:04:53.963 00:04:53.963 CUnit - A unit testing framework for C - Version 2.1-3 00:04:53.963 http://cunit.sourceforge.net/ 00:04:53.963 00:04:53.963 00:04:53.963 Suite: memory 00:04:53.963 Test: alloc and free memory map ...[2024-12-16 02:26:24.411047] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:53.963 passed 00:04:53.963 Test: mem map translation ...[2024-12-16 02:26:24.429528] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:53.963 [2024-12-16 
02:26:24.429541] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:53.963 [2024-12-16 02:26:24.429592] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:53.963 [2024-12-16 02:26:24.429598] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:53.963 passed 00:04:53.963 Test: mem map registration ...[2024-12-16 02:26:24.468327] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:53.963 [2024-12-16 02:26:24.468341] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:53.963 passed 00:04:53.963 Test: mem map adjacent registrations ...passed 00:04:53.963 00:04:53.963 Run Summary: Type Total Ran Passed Failed Inactive 00:04:53.963 suites 1 1 n/a 0 0 00:04:53.963 tests 4 4 4 0 0 00:04:53.963 asserts 152 152 152 0 n/a 00:04:53.963 00:04:53.963 Elapsed time = 0.136 seconds 00:04:53.963 00:04:53.963 real 0m0.146s 00:04:53.963 user 0m0.138s 00:04:53.963 sys 0m0.008s 00:04:53.963 02:26:24 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.963 02:26:24 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:53.964 ************************************ 00:04:53.964 END TEST env_memory 00:04:53.964 ************************************ 00:04:53.964 02:26:24 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:53.964 02:26:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:04:53.964 02:26:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.964 02:26:24 env -- common/autotest_common.sh@10 -- # set +x 00:04:53.964 ************************************ 00:04:53.964 START TEST env_vtophys 00:04:53.964 ************************************ 00:04:53.964 02:26:24 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:53.964 EAL: lib.eal log level changed from notice to debug 00:04:53.964 EAL: Detected lcore 0 as core 0 on socket 0 00:04:53.964 EAL: Detected lcore 1 as core 1 on socket 0 00:04:53.964 EAL: Detected lcore 2 as core 2 on socket 0 00:04:53.964 EAL: Detected lcore 3 as core 3 on socket 0 00:04:53.964 EAL: Detected lcore 4 as core 4 on socket 0 00:04:53.964 EAL: Detected lcore 5 as core 5 on socket 0 00:04:53.964 EAL: Detected lcore 6 as core 6 on socket 0 00:04:53.964 EAL: Detected lcore 7 as core 8 on socket 0 00:04:53.964 EAL: Detected lcore 8 as core 9 on socket 0 00:04:53.964 EAL: Detected lcore 9 as core 10 on socket 0 00:04:53.964 EAL: Detected lcore 10 as core 11 on socket 0 00:04:53.964 EAL: Detected lcore 11 as core 12 on socket 0 00:04:53.964 EAL: Detected lcore 12 as core 13 on socket 0 00:04:53.964 EAL: Detected lcore 13 as core 16 on socket 0 00:04:53.964 EAL: Detected lcore 14 as core 17 on socket 0 00:04:53.964 EAL: Detected lcore 15 as core 18 on socket 0 00:04:53.964 EAL: Detected lcore 16 as core 19 on socket 0 00:04:53.964 EAL: Detected lcore 17 as core 20 on socket 0 00:04:53.964 EAL: Detected lcore 18 as core 21 on socket 0 00:04:53.964 EAL: Detected lcore 19 as core 25 on socket 0 00:04:53.964 EAL: Detected lcore 20 as core 26 on socket 0 00:04:53.964 EAL: Detected lcore 21 as core 27 on socket 0 00:04:53.964 EAL: Detected lcore 22 as core 28 on socket 0 00:04:53.964 EAL: Detected lcore 23 as core 29 on socket 0 00:04:53.964 EAL: Detected lcore 24 as core 0 on socket 1 00:04:53.964 EAL: Detected lcore 25 
as core 1 on socket 1 00:04:53.964 EAL: Detected lcore 26 as core 2 on socket 1 00:04:53.964 EAL: Detected lcore 27 as core 3 on socket 1 00:04:53.964 EAL: Detected lcore 28 as core 4 on socket 1 00:04:53.964 EAL: Detected lcore 29 as core 5 on socket 1 00:04:53.964 EAL: Detected lcore 30 as core 6 on socket 1 00:04:53.964 EAL: Detected lcore 31 as core 8 on socket 1 00:04:53.964 EAL: Detected lcore 32 as core 9 on socket 1 00:04:53.964 EAL: Detected lcore 33 as core 10 on socket 1 00:04:53.964 EAL: Detected lcore 34 as core 11 on socket 1 00:04:53.964 EAL: Detected lcore 35 as core 12 on socket 1 00:04:53.964 EAL: Detected lcore 36 as core 13 on socket 1 00:04:53.964 EAL: Detected lcore 37 as core 16 on socket 1 00:04:53.964 EAL: Detected lcore 38 as core 17 on socket 1 00:04:53.964 EAL: Detected lcore 39 as core 18 on socket 1 00:04:53.964 EAL: Detected lcore 40 as core 19 on socket 1 00:04:53.964 EAL: Detected lcore 41 as core 20 on socket 1 00:04:53.964 EAL: Detected lcore 42 as core 21 on socket 1 00:04:53.964 EAL: Detected lcore 43 as core 25 on socket 1 00:04:53.964 EAL: Detected lcore 44 as core 26 on socket 1 00:04:53.964 EAL: Detected lcore 45 as core 27 on socket 1 00:04:53.964 EAL: Detected lcore 46 as core 28 on socket 1 00:04:53.964 EAL: Detected lcore 47 as core 29 on socket 1 00:04:53.964 EAL: Detected lcore 48 as core 0 on socket 0 00:04:53.964 EAL: Detected lcore 49 as core 1 on socket 0 00:04:53.964 EAL: Detected lcore 50 as core 2 on socket 0 00:04:53.964 EAL: Detected lcore 51 as core 3 on socket 0 00:04:53.964 EAL: Detected lcore 52 as core 4 on socket 0 00:04:53.964 EAL: Detected lcore 53 as core 5 on socket 0 00:04:53.964 EAL: Detected lcore 54 as core 6 on socket 0 00:04:53.964 EAL: Detected lcore 55 as core 8 on socket 0 00:04:53.964 EAL: Detected lcore 56 as core 9 on socket 0 00:04:53.964 EAL: Detected lcore 57 as core 10 on socket 0 00:04:53.964 EAL: Detected lcore 58 as core 11 on socket 0 00:04:53.964 EAL: Detected lcore 59 as core 12 
on socket 0 00:04:53.964 EAL: Detected lcore 60 as core 13 on socket 0 00:04:53.964 EAL: Detected lcore 61 as core 16 on socket 0 00:04:53.964 EAL: Detected lcore 62 as core 17 on socket 0 00:04:53.964 EAL: Detected lcore 63 as core 18 on socket 0 00:04:53.964 EAL: Detected lcore 64 as core 19 on socket 0 00:04:53.964 EAL: Detected lcore 65 as core 20 on socket 0 00:04:53.964 EAL: Detected lcore 66 as core 21 on socket 0 00:04:53.964 EAL: Detected lcore 67 as core 25 on socket 0 00:04:53.964 EAL: Detected lcore 68 as core 26 on socket 0 00:04:53.964 EAL: Detected lcore 69 as core 27 on socket 0 00:04:53.964 EAL: Detected lcore 70 as core 28 on socket 0 00:04:53.964 EAL: Detected lcore 71 as core 29 on socket 0 00:04:53.964 EAL: Detected lcore 72 as core 0 on socket 1 00:04:53.964 EAL: Detected lcore 73 as core 1 on socket 1 00:04:53.964 EAL: Detected lcore 74 as core 2 on socket 1 00:04:53.964 EAL: Detected lcore 75 as core 3 on socket 1 00:04:53.964 EAL: Detected lcore 76 as core 4 on socket 1 00:04:53.964 EAL: Detected lcore 77 as core 5 on socket 1 00:04:53.964 EAL: Detected lcore 78 as core 6 on socket 1 00:04:53.964 EAL: Detected lcore 79 as core 8 on socket 1 00:04:53.964 EAL: Detected lcore 80 as core 9 on socket 1 00:04:53.964 EAL: Detected lcore 81 as core 10 on socket 1 00:04:53.964 EAL: Detected lcore 82 as core 11 on socket 1 00:04:53.964 EAL: Detected lcore 83 as core 12 on socket 1 00:04:53.964 EAL: Detected lcore 84 as core 13 on socket 1 00:04:53.964 EAL: Detected lcore 85 as core 16 on socket 1 00:04:53.964 EAL: Detected lcore 86 as core 17 on socket 1 00:04:53.964 EAL: Detected lcore 87 as core 18 on socket 1 00:04:53.964 EAL: Detected lcore 88 as core 19 on socket 1 00:04:53.964 EAL: Detected lcore 89 as core 20 on socket 1 00:04:53.964 EAL: Detected lcore 90 as core 21 on socket 1 00:04:53.964 EAL: Detected lcore 91 as core 25 on socket 1 00:04:53.964 EAL: Detected lcore 92 as core 26 on socket 1 00:04:53.964 EAL: Detected lcore 93 as core 27 on 
socket 1 00:04:53.964 EAL: Detected lcore 94 as core 28 on socket 1 00:04:53.964 EAL: Detected lcore 95 as core 29 on socket 1 00:04:53.964 EAL: Maximum logical cores by configuration: 128 00:04:53.964 EAL: Detected CPU lcores: 96 00:04:53.964 EAL: Detected NUMA nodes: 2 00:04:53.964 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:53.964 EAL: Detected shared linkage of DPDK 00:04:53.964 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:04:53.964 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:04:53.964 EAL: Registered [vdev] bus. 00:04:53.964 EAL: bus.vdev log level changed from disabled to notice 00:04:53.964 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:04:53.964 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:04:53.964 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:53.964 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:53.964 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:04:53.964 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:04:53.964 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:04:53.964 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:04:53.964 EAL: No shared files mode enabled, IPC will be disabled 00:04:54.223 EAL: No shared files mode enabled, IPC is disabled 00:04:54.223 EAL: Bus pci wants IOVA as 'DC' 00:04:54.223 EAL: Bus vdev wants IOVA as 'DC' 00:04:54.223 EAL: Buses did not request a specific IOVA mode. 
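Since neither bus forced a specific IOVA mode, EAL next probes VFIO and chooses the mode itself. The preconditions it is about to report can also be inspected by hand; a hedged sketch (the paths are the standard Linux sysfs locations, but the `have_vfio` helper and its test-only root argument are illustrative):

```shell
# Hedged sketch of the preconditions behind "Probing VFIO support...":
# vfio-pci must be registered as a PCI driver, and at least one IOMMU
# group must exist for IOVA-as-VA mode to be usable. The optional root
# argument exists only so the check can be exercised against a fake tree.
have_vfio() {
    local sysroot=${1:-}
    # vfio-pci registered with the PCI core?
    [ -d "$sysroot/sys/bus/pci/drivers/vfio-pci" ] || return 1
    # A non-empty iommu_groups directory means the IOMMU is in use.
    [ -n "$(ls -A "$sysroot/sys/kernel/iommu_groups" 2>/dev/null)" ]
}
```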
00:04:54.223 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:54.223 EAL: Selected IOVA mode 'VA' 00:04:54.223 EAL: Probing VFIO support... 00:04:54.223 EAL: IOMMU type 1 (Type 1) is supported 00:04:54.223 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:54.223 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:54.223 EAL: VFIO support initialized 00:04:54.223 EAL: Ask a virtual area of 0x2e000 bytes 00:04:54.223 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:54.223 EAL: Setting up physically contiguous memory... 00:04:54.223 EAL: Setting maximum number of open files to 524288 00:04:54.223 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:54.223 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:54.223 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:54.223 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.223 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:54.223 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.223 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.223 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:54.223 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:54.223 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.223 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:54.223 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.224 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.224 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:54.224 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:54.224 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.224 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:54.224 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.224 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.224 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 
00:04:54.224 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:54.224 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.224 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:54.224 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:54.224 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.224 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:54.224 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:54.224 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:54.224 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.224 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:54.224 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:54.224 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.224 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:54.224 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:54.224 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.224 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:54.224 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:54.224 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.224 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:54.224 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:54.224 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.224 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:54.224 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:54.224 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.224 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:54.224 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:54.224 EAL: Ask a virtual area of 0x61000 bytes 00:04:54.224 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:54.224 EAL: Memseg list allocated at socket 1, page 
size 0x800kB 00:04:54.224 EAL: Ask a virtual area of 0x400000000 bytes 00:04:54.224 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:54.224 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:54.224 EAL: Hugepages will be freed exactly as allocated. 00:04:54.224 EAL: No shared files mode enabled, IPC is disabled 00:04:54.224 EAL: No shared files mode enabled, IPC is disabled 00:04:54.224 EAL: TSC frequency is ~2100000 KHz 00:04:54.224 EAL: Main lcore 0 is ready (tid=7f33beaeba00;cpuset=[0]) 00:04:54.224 EAL: Trying to obtain current memory policy. 00:04:54.224 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.224 EAL: Restoring previous memory policy: 0 00:04:54.224 EAL: request: mp_malloc_sync 00:04:54.224 EAL: No shared files mode enabled, IPC is disabled 00:04:54.224 EAL: Heap on socket 0 was expanded by 2MB 00:04:54.224 EAL: PCI device 0000:3d:00.0 on NUMA socket 0 00:04:54.224 EAL: probe driver: 8086:37d2 net_i40e 00:04:54.224 EAL: Not managed by a supported kernel driver, skipped 00:04:54.224 EAL: PCI device 0000:3d:00.1 on NUMA socket 0 00:04:54.224 EAL: probe driver: 8086:37d2 net_i40e 00:04:54.224 EAL: Not managed by a supported kernel driver, skipped 00:04:54.224 EAL: No shared files mode enabled, IPC is disabled 00:04:54.224 EAL: No shared files mode enabled, IPC is disabled 00:04:54.224 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:54.224 EAL: Mem event callback 'spdk:(nil)' registered 00:04:54.224 00:04:54.224 00:04:54.224 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.224 http://cunit.sourceforge.net/ 00:04:54.224 00:04:54.224 00:04:54.224 Suite: components_suite 00:04:54.224 Test: vtophys_malloc_test ...passed 00:04:54.224 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:04:54.224 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.224 EAL: Restoring previous memory policy: 4 00:04:54.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.224 EAL: request: mp_malloc_sync 00:04:54.224 EAL: No shared files mode enabled, IPC is disabled 00:04:54.224 EAL: Heap on socket 0 was expanded by 4MB 00:04:54.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.224 EAL: request: mp_malloc_sync 00:04:54.224 EAL: No shared files mode enabled, IPC is disabled 00:04:54.224 EAL: Heap on socket 0 was shrunk by 4MB 00:04:54.224 EAL: Trying to obtain current memory policy. 00:04:54.224 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.224 EAL: Restoring previous memory policy: 4 00:04:54.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.224 EAL: request: mp_malloc_sync 00:04:54.224 EAL: No shared files mode enabled, IPC is disabled 00:04:54.224 EAL: Heap on socket 0 was expanded by 6MB 00:04:54.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.224 EAL: request: mp_malloc_sync 00:04:54.224 EAL: No shared files mode enabled, IPC is disabled 00:04:54.224 EAL: Heap on socket 0 was shrunk by 6MB 00:04:54.224 EAL: Trying to obtain current memory policy. 00:04:54.224 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.224 EAL: Restoring previous memory policy: 4 00:04:54.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.224 EAL: request: mp_malloc_sync 00:04:54.224 EAL: No shared files mode enabled, IPC is disabled 00:04:54.224 EAL: Heap on socket 0 was expanded by 10MB 00:04:54.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.224 EAL: request: mp_malloc_sync 00:04:54.224 EAL: No shared files mode enabled, IPC is disabled 00:04:54.224 EAL: Heap on socket 0 was shrunk by 10MB 00:04:54.224 EAL: Trying to obtain current memory policy. 
00:04:54.224 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.224 EAL: Restoring previous memory policy: 4 00:04:54.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.224 EAL: request: mp_malloc_sync 00:04:54.224 EAL: No shared files mode enabled, IPC is disabled 00:04:54.224 EAL: Heap on socket 0 was expanded by 18MB 00:04:54.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.224 EAL: request: mp_malloc_sync 00:04:54.224 EAL: No shared files mode enabled, IPC is disabled 00:04:54.224 EAL: Heap on socket 0 was shrunk by 18MB 00:04:54.224 EAL: Trying to obtain current memory policy. 00:04:54.224 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.224 EAL: Restoring previous memory policy: 4 00:04:54.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.224 EAL: request: mp_malloc_sync 00:04:54.224 EAL: No shared files mode enabled, IPC is disabled 00:04:54.224 EAL: Heap on socket 0 was expanded by 34MB 00:04:54.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.224 EAL: request: mp_malloc_sync 00:04:54.224 EAL: No shared files mode enabled, IPC is disabled 00:04:54.224 EAL: Heap on socket 0 was shrunk by 34MB 00:04:54.224 EAL: Trying to obtain current memory policy. 00:04:54.224 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.224 EAL: Restoring previous memory policy: 4 00:04:54.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.224 EAL: request: mp_malloc_sync 00:04:54.224 EAL: No shared files mode enabled, IPC is disabled 00:04:54.224 EAL: Heap on socket 0 was expanded by 66MB 00:04:54.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.224 EAL: request: mp_malloc_sync 00:04:54.224 EAL: No shared files mode enabled, IPC is disabled 00:04:54.224 EAL: Heap on socket 0 was shrunk by 66MB 00:04:54.224 EAL: Trying to obtain current memory policy. 
00:04:54.224 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.224 EAL: Restoring previous memory policy: 4 00:04:54.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.224 EAL: request: mp_malloc_sync 00:04:54.224 EAL: No shared files mode enabled, IPC is disabled 00:04:54.224 EAL: Heap on socket 0 was expanded by 130MB 00:04:54.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.224 EAL: request: mp_malloc_sync 00:04:54.224 EAL: No shared files mode enabled, IPC is disabled 00:04:54.224 EAL: Heap on socket 0 was shrunk by 130MB 00:04:54.224 EAL: Trying to obtain current memory policy. 00:04:54.224 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.224 EAL: Restoring previous memory policy: 4 00:04:54.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.224 EAL: request: mp_malloc_sync 00:04:54.224 EAL: No shared files mode enabled, IPC is disabled 00:04:54.224 EAL: Heap on socket 0 was expanded by 258MB 00:04:54.224 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.483 EAL: request: mp_malloc_sync 00:04:54.483 EAL: No shared files mode enabled, IPC is disabled 00:04:54.483 EAL: Heap on socket 0 was shrunk by 258MB 00:04:54.483 EAL: Trying to obtain current memory policy. 00:04:54.483 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:54.483 EAL: Restoring previous memory policy: 4 00:04:54.483 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.483 EAL: request: mp_malloc_sync 00:04:54.483 EAL: No shared files mode enabled, IPC is disabled 00:04:54.483 EAL: Heap on socket 0 was expanded by 514MB 00:04:54.483 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.742 EAL: request: mp_malloc_sync 00:04:54.742 EAL: No shared files mode enabled, IPC is disabled 00:04:54.742 EAL: Heap on socket 0 was shrunk by 514MB 00:04:54.742 EAL: Trying to obtain current memory policy. 
00:04:54.742 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:54.742 EAL: Restoring previous memory policy: 4
00:04:54.742 EAL: Calling mem event callback 'spdk:(nil)'
00:04:54.742 EAL: request: mp_malloc_sync
00:04:54.742 EAL: No shared files mode enabled, IPC is disabled
00:04:54.742 EAL: Heap on socket 0 was expanded by 1026MB
00:04:55.000 EAL: Calling mem event callback 'spdk:(nil)'
00:04:55.260 EAL: request: mp_malloc_sync
00:04:55.260 EAL: No shared files mode enabled, IPC is disabled
00:04:55.260 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:55.260 passed
00:04:55.260 
00:04:55.260 Run Summary: Type Total Ran Passed Failed Inactive
00:04:55.260 suites 1 1 n/a 0 0
00:04:55.260 tests 2 2 2 0 0
00:04:55.260 asserts 497 497 497 0 n/a
00:04:55.260 
00:04:55.260 Elapsed time = 0.963 seconds
00:04:55.260 EAL: Calling mem event callback 'spdk:(nil)'
00:04:55.260 EAL: request: mp_malloc_sync
00:04:55.260 EAL: No shared files mode enabled, IPC is disabled
00:04:55.260 EAL: Heap on socket 0 was shrunk by 2MB
00:04:55.260 EAL: No shared files mode enabled, IPC is disabled
00:04:55.260 EAL: No shared files mode enabled, IPC is disabled
00:04:55.260 EAL: No shared files mode enabled, IPC is disabled
00:04:55.260 
00:04:55.260 real 0m1.092s
00:04:55.260 user 0m0.640s
00:04:55.260 sys 0m0.426s
00:04:55.260 02:26:25 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:55.260 02:26:25 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:55.260 ************************************
00:04:55.260 END TEST env_vtophys
00:04:55.260 ************************************
00:04:55.260 02:26:25 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:55.260 02:26:25 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:55.260 02:26:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:55.260 02:26:25 env -- common/autotest_common.sh@10 -- # set +x
00:04:55.260 ************************************
00:04:55.260 START TEST env_pci
00:04:55.260 ************************************
00:04:55.260 02:26:25 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:55.260 
00:04:55.260 
00:04:55.260 CUnit - A unit testing framework for C - Version 2.1-3
00:04:55.260 http://cunit.sourceforge.net/
00:04:55.260 
00:04:55.260 
00:04:55.260 Suite: pci
00:04:55.260 Test: pci_hook ...[2024-12-16 02:26:25.771711] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 771540 has claimed it
00:04:55.260 EAL: Cannot find device (10000:00:01.0)
00:04:55.260 EAL: Failed to attach device on primary process
00:04:55.260 passed
00:04:55.260 
00:04:55.260 Run Summary: Type Total Ran Passed Failed Inactive
00:04:55.260 suites 1 1 n/a 0 0
00:04:55.260 tests 1 1 1 0 0
00:04:55.260 asserts 25 25 25 0 n/a
00:04:55.260 
00:04:55.260 Elapsed time = 0.030 seconds
00:04:55.260 
00:04:55.260 real 0m0.050s
00:04:55.260 user 0m0.014s
00:04:55.260 sys 0m0.035s
00:04:55.260 02:26:25 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:55.260 02:26:25 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:55.260 ************************************
00:04:55.260 END TEST env_pci
00:04:55.260 ************************************
00:04:55.260 02:26:25 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:55.260 02:26:25 env -- env/env.sh@15 -- # uname
00:04:55.260 02:26:25 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:55.260 02:26:25 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:55.260 02:26:25 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:55.260 02:26:25 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:04:55.260 02:26:25 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:55.260 02:26:25 env -- common/autotest_common.sh@10 -- # set +x
00:04:55.260 ************************************
00:04:55.260 START TEST env_dpdk_post_init
00:04:55.260 ************************************
00:04:55.260 02:26:25 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:55.260 EAL: Detected CPU lcores: 96
00:04:55.260 EAL: Detected NUMA nodes: 2
00:04:55.260 EAL: Detected shared linkage of DPDK
00:04:55.260 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:55.519 EAL: Selected IOVA mode 'VA'
00:04:55.519 EAL: VFIO support initialized
00:04:55.519 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:55.519 EAL: Using IOMMU type 1 (Type 1)
00:04:55.519 EAL: Ignore mapping IO port bar(1)
00:04:55.519 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:04:55.519 EAL: Ignore mapping IO port bar(1)
00:04:55.519 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:04:55.519 EAL: Ignore mapping IO port bar(1)
00:04:55.519 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:04:55.519 EAL: Ignore mapping IO port bar(1)
00:04:55.519 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:04:55.519 EAL: Ignore mapping IO port bar(1)
00:04:55.519 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:04:55.519 EAL: Ignore mapping IO port bar(1)
00:04:55.519 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:04:55.519 EAL: Ignore mapping IO port bar(1)
00:04:55.519 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:04:55.519 EAL: Ignore mapping IO port bar(1)
00:04:55.519 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:04:56.457 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:04:56.457 EAL: Ignore mapping IO port bar(1)
00:04:56.457 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:04:56.457 EAL: Ignore mapping IO port bar(1)
00:04:56.457 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:04:56.457 EAL: Ignore mapping IO port bar(1)
00:04:56.457 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:04:56.457 EAL: Ignore mapping IO port bar(1)
00:04:56.457 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:04:56.457 EAL: Ignore mapping IO port bar(1)
00:04:56.457 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:04:56.457 EAL: Ignore mapping IO port bar(1)
00:04:56.457 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:04:56.457 EAL: Ignore mapping IO port bar(1)
00:04:56.457 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:04:56.457 EAL: Ignore mapping IO port bar(1)
00:04:56.457 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:04:59.855 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:04:59.855 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:04:59.855 Starting DPDK initialization...
00:04:59.855 Starting SPDK post initialization...
00:04:59.855 SPDK NVMe probe
00:04:59.855 Attaching to 0000:5e:00.0
00:04:59.855 Attached to 0000:5e:00.0
00:04:59.855 Cleaning up...
00:04:59.855 
00:04:59.855 real 0m4.348s
00:04:59.855 user 0m3.258s
00:04:59.855 sys 0m0.159s
00:04:59.855 02:26:30 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:59.855 02:26:30 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:59.855 ************************************
00:04:59.855 END TEST env_dpdk_post_init
00:04:59.855 ************************************
00:04:59.855 02:26:30 env -- env/env.sh@26 -- # uname
00:04:59.855 02:26:30 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:59.855 02:26:30 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:59.855 02:26:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:59.855 02:26:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:59.855 02:26:30 env -- common/autotest_common.sh@10 -- # set +x
00:04:59.855 ************************************
00:04:59.855 START TEST env_mem_callbacks
00:04:59.855 ************************************
00:04:59.855 02:26:30 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:59.855 EAL: Detected CPU lcores: 96
00:04:59.855 EAL: Detected NUMA nodes: 2
00:04:59.855 EAL: Detected shared linkage of DPDK
00:04:59.855 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:59.855 EAL: Selected IOVA mode 'VA'
00:04:59.855 EAL: VFIO support initialized
00:04:59.855 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:59.855 
00:04:59.855 
00:04:59.855 CUnit - A unit testing framework for C - Version 2.1-3
00:04:59.855 http://cunit.sourceforge.net/
00:04:59.855 
00:04:59.855 
00:04:59.855 Suite: memory
00:04:59.855 Test: test ...
00:04:59.855 register 0x200000200000 2097152
00:04:59.855 malloc 3145728
00:04:59.855 register 0x200000400000 4194304
00:04:59.855 buf 0x200000500000 len 3145728 PASSED
00:04:59.855 malloc 64
00:04:59.855 buf 0x2000004fff40 len 64 PASSED
00:04:59.855 malloc 4194304
00:04:59.855 register 0x200000800000 6291456
00:04:59.855 buf 0x200000a00000 len 4194304 PASSED
00:04:59.855 free 0x200000500000 3145728
00:04:59.855 free 0x2000004fff40 64
00:04:59.855 unregister 0x200000400000 4194304 PASSED
00:04:59.855 free 0x200000a00000 4194304
00:04:59.855 unregister 0x200000800000 6291456 PASSED
00:04:59.855 malloc 8388608
00:04:59.855 register 0x200000400000 10485760
00:04:59.855 buf 0x200000600000 len 8388608 PASSED
00:04:59.855 free 0x200000600000 8388608
00:04:59.855 unregister 0x200000400000 10485760 PASSED
00:04:59.855 passed
00:04:59.855 
00:04:59.855 Run Summary: Type Total Ran Passed Failed Inactive
00:04:59.855 suites 1 1 n/a 0 0
00:04:59.855 tests 1 1 1 0 0
00:04:59.855 asserts 15 15 15 0 n/a
00:04:59.855 
00:04:59.855 Elapsed time = 0.008 seconds
00:04:59.855 
00:04:59.855 real 0m0.060s
00:04:59.855 user 0m0.018s
00:04:59.855 sys 0m0.041s
00:04:59.855 02:26:30 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:59.855 02:26:30 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:59.855 ************************************
00:04:59.855 END TEST env_mem_callbacks
00:04:59.855 ************************************
00:04:59.855 
00:04:59.855 real 0m6.242s
00:04:59.855 user 0m4.314s
00:04:59.855 sys 0m1.004s
00:04:59.855 02:26:30 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:59.855 02:26:30 env -- common/autotest_common.sh@10 -- # set +x
00:04:59.855 ************************************
00:04:59.855 END TEST env
00:04:59.855 ************************************
00:04:59.855 02:26:30 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:59.855 02:26:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:59.855 02:26:30 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:59.855 02:26:30 -- common/autotest_common.sh@10 -- # set +x
00:04:59.855 ************************************
00:04:59.855 START TEST rpc
00:04:59.855 ************************************
00:04:59.855 02:26:30 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:05:00.114 * Looking for test storage...
00:05:00.114 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:00.114 02:26:30 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:00.114 02:26:30 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:05:00.114 02:26:30 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:00.114 02:26:30 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:00.114 02:26:30 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:00.114 02:26:30 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:00.114 02:26:30 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:00.114 02:26:30 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:05:00.114 02:26:30 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:05:00.114 02:26:30 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:05:00.114 02:26:30 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:05:00.114 02:26:30 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:05:00.114 02:26:30 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:05:00.114 02:26:30 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:05:00.114 02:26:30 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:00.114 02:26:30 rpc -- scripts/common.sh@344 -- # case "$op" in
00:05:00.114 02:26:30 rpc -- scripts/common.sh@345 -- # : 1
00:05:00.114 02:26:30 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:00.114 02:26:30 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:00.114 02:26:30 rpc -- scripts/common.sh@365 -- # decimal 1
00:05:00.115 02:26:30 rpc -- scripts/common.sh@353 -- # local d=1
00:05:00.115 02:26:30 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:00.115 02:26:30 rpc -- scripts/common.sh@355 -- # echo 1
00:05:00.115 02:26:30 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:05:00.115 02:26:30 rpc -- scripts/common.sh@366 -- # decimal 2
00:05:00.115 02:26:30 rpc -- scripts/common.sh@353 -- # local d=2
00:05:00.115 02:26:30 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:00.115 02:26:30 rpc -- scripts/common.sh@355 -- # echo 2
00:05:00.115 02:26:30 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:05:00.115 02:26:30 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:00.115 02:26:30 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:00.115 02:26:30 rpc -- scripts/common.sh@368 -- # return 0
00:05:00.115 02:26:30 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:00.115 02:26:30 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:00.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:00.115 --rc genhtml_branch_coverage=1
00:05:00.115 --rc genhtml_function_coverage=1
00:05:00.115 --rc genhtml_legend=1
00:05:00.115 --rc geninfo_all_blocks=1
00:05:00.115 --rc geninfo_unexecuted_blocks=1
00:05:00.115 
00:05:00.115 '
00:05:00.115 02:26:30 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:00.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:00.115 --rc genhtml_branch_coverage=1
00:05:00.115 --rc genhtml_function_coverage=1
00:05:00.115 --rc genhtml_legend=1
00:05:00.115 --rc geninfo_all_blocks=1
00:05:00.115 --rc geninfo_unexecuted_blocks=1
00:05:00.115 
00:05:00.115 '
00:05:00.115 02:26:30 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:00.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:00.115 --rc genhtml_branch_coverage=1
00:05:00.115 --rc genhtml_function_coverage=1
00:05:00.115 --rc genhtml_legend=1
00:05:00.115 --rc geninfo_all_blocks=1
00:05:00.115 --rc geninfo_unexecuted_blocks=1
00:05:00.115 
00:05:00.115 '
00:05:00.115 02:26:30 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:00.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:00.115 --rc genhtml_branch_coverage=1
00:05:00.115 --rc genhtml_function_coverage=1
00:05:00.115 --rc genhtml_legend=1
00:05:00.115 --rc geninfo_all_blocks=1
00:05:00.115 --rc geninfo_unexecuted_blocks=1
00:05:00.115 
00:05:00.115 '
00:05:00.115 02:26:30 rpc -- rpc/rpc.sh@65 -- # spdk_pid=772395
00:05:00.115 02:26:30 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:00.115 02:26:30 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:05:00.115 02:26:30 rpc -- rpc/rpc.sh@67 -- # waitforlisten 772395
00:05:00.115 02:26:30 rpc -- common/autotest_common.sh@835 -- # '[' -z 772395 ']'
00:05:00.115 02:26:30 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:00.115 02:26:30 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:00.115 02:26:30 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:00.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:00.115 02:26:30 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:00.115 02:26:30 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:00.115 [2024-12-16 02:26:30.706530] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:05:00.115 [2024-12-16 02:26:30.706574] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772395 ]
00:05:00.374 [2024-12-16 02:26:30.780554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:00.374 [2024-12-16 02:26:30.802315] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:05:00.374 [2024-12-16 02:26:30.802352] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 772395' to capture a snapshot of events at runtime.
00:05:00.374 [2024-12-16 02:26:30.802359] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:00.374 [2024-12-16 02:26:30.802364] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:00.374 [2024-12-16 02:26:30.802369] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid772395 for offline analysis/debug.
00:05:00.374 [2024-12-16 02:26:30.802864] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:00.374 02:26:31 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:00.374 02:26:31 rpc -- common/autotest_common.sh@868 -- # return 0
00:05:00.374 02:26:31 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:00.374 02:26:31 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:00.374 02:26:31 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:05:00.374 02:26:31 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:05:00.374 02:26:31 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:00.374 02:26:31 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:00.374 02:26:31 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:00.633 ************************************
00:05:00.633 START TEST rpc_integrity
00:05:00.633 ************************************
00:05:00.633 02:26:31 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:05:00.633 02:26:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:00.633 02:26:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:00.633 02:26:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:00.633 02:26:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:00.633 02:26:31 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:00.633 02:26:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:00.633 02:26:31 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:00.633 02:26:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:00.633 02:26:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:00.633 02:26:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:00.633 02:26:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:00.633 02:26:31 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:05:00.633 02:26:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:00.633 02:26:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:00.633 02:26:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:00.633 02:26:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:00.633 02:26:31 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:00.633 {
00:05:00.633 "name": "Malloc0",
00:05:00.633 "aliases": [
00:05:00.633 "802a8684-3c35-4cf5-ada0-d52627d4778f"
00:05:00.633 ],
00:05:00.633 "product_name": "Malloc disk",
00:05:00.633 "block_size": 512,
00:05:00.633 "num_blocks": 16384,
00:05:00.633 "uuid": "802a8684-3c35-4cf5-ada0-d52627d4778f",
00:05:00.633 "assigned_rate_limits": {
00:05:00.633 "rw_ios_per_sec": 0,
00:05:00.633 "rw_mbytes_per_sec": 0,
00:05:00.633 "r_mbytes_per_sec": 0,
00:05:00.633 "w_mbytes_per_sec": 0
00:05:00.633 },
00:05:00.633 "claimed": false,
00:05:00.633 "zoned": false,
00:05:00.633 "supported_io_types": {
00:05:00.633 "read": true,
00:05:00.633 "write": true,
00:05:00.633 "unmap": true,
00:05:00.633 "flush": true,
00:05:00.633 "reset": true,
00:05:00.633 "nvme_admin": false,
00:05:00.633 "nvme_io": false,
00:05:00.633 "nvme_io_md": false,
00:05:00.633 "write_zeroes": true,
00:05:00.633 "zcopy": true,
00:05:00.633 "get_zone_info": false,
00:05:00.633 "zone_management": false,
00:05:00.633 "zone_append": false,
00:05:00.633 "compare": false,
00:05:00.633 "compare_and_write": false,
00:05:00.633 "abort": true,
00:05:00.633 "seek_hole": false,
00:05:00.633 "seek_data": false,
00:05:00.633 "copy": true,
00:05:00.633 "nvme_iov_md": false
00:05:00.633 },
00:05:00.633 "memory_domains": [
00:05:00.633 {
00:05:00.633 "dma_device_id": "system",
00:05:00.633 "dma_device_type": 1
00:05:00.633 },
00:05:00.633 {
00:05:00.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:00.633 "dma_device_type": 2
00:05:00.633 }
00:05:00.633 ],
00:05:00.633 "driver_specific": {}
00:05:00.633 }
00:05:00.633 ]'
00:05:00.633 02:26:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:00.633 02:26:31 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:00.633 02:26:31 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:05:00.633 02:26:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:00.633 02:26:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:00.633 [2024-12-16 02:26:31.166240] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:05:00.633 [2024-12-16 02:26:31.166272] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:00.633 [2024-12-16 02:26:31.166286] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20c0ae0
00:05:00.633 [2024-12-16 02:26:31.166292] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:00.633 [2024-12-16 02:26:31.167545] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:00.633 [2024-12-16 02:26:31.167570] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:00.633 Passthru0
00:05:00.633 02:26:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:00.633 02:26:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:00.633 02:26:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:00.633 02:26:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:00.633 02:26:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:00.633 02:26:31 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:05:00.633 {
00:05:00.633 "name": "Malloc0",
00:05:00.633 "aliases": [
00:05:00.633 "802a8684-3c35-4cf5-ada0-d52627d4778f"
00:05:00.633 ],
00:05:00.633 "product_name": "Malloc disk",
00:05:00.633 "block_size": 512,
00:05:00.633 "num_blocks": 16384,
00:05:00.633 "uuid": "802a8684-3c35-4cf5-ada0-d52627d4778f",
00:05:00.633 "assigned_rate_limits": {
00:05:00.633 "rw_ios_per_sec": 0,
00:05:00.633 "rw_mbytes_per_sec": 0,
00:05:00.633 "r_mbytes_per_sec": 0,
00:05:00.633 "w_mbytes_per_sec": 0
00:05:00.633 },
00:05:00.633 "claimed": true,
00:05:00.633 "claim_type": "exclusive_write",
00:05:00.633 "zoned": false,
00:05:00.633 "supported_io_types": {
00:05:00.633 "read": true,
00:05:00.633 "write": true,
00:05:00.633 "unmap": true,
00:05:00.633 "flush": true,
00:05:00.633 "reset": true,
00:05:00.633 "nvme_admin": false,
00:05:00.633 "nvme_io": false,
00:05:00.633 "nvme_io_md": false,
00:05:00.633 "write_zeroes": true,
00:05:00.633 "zcopy": true,
00:05:00.633 "get_zone_info": false,
00:05:00.633 "zone_management": false,
00:05:00.633 "zone_append": false,
00:05:00.633 "compare": false,
00:05:00.633 "compare_and_write": false,
00:05:00.633 "abort": true,
00:05:00.633 "seek_hole": false,
00:05:00.633 "seek_data": false,
00:05:00.633 "copy": true,
00:05:00.633 "nvme_iov_md": false
00:05:00.633 },
00:05:00.633 "memory_domains": [
00:05:00.633 {
00:05:00.633 "dma_device_id": "system",
00:05:00.633 "dma_device_type": 1
00:05:00.633 },
00:05:00.633 {
00:05:00.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:00.633 "dma_device_type": 2
00:05:00.633 }
00:05:00.633 ],
00:05:00.633 "driver_specific": {}
00:05:00.633 },
00:05:00.633 {
00:05:00.633 "name": "Passthru0",
00:05:00.633 "aliases": [
00:05:00.633 "4e36f4c1-d31e-558b-8e8c-315396feef34"
00:05:00.633 ],
00:05:00.633 "product_name": "passthru",
00:05:00.633 "block_size": 512,
00:05:00.633 "num_blocks": 16384,
00:05:00.633 "uuid": "4e36f4c1-d31e-558b-8e8c-315396feef34",
00:05:00.633 "assigned_rate_limits": {
00:05:00.633 "rw_ios_per_sec": 0,
00:05:00.633 "rw_mbytes_per_sec": 0,
00:05:00.633 "r_mbytes_per_sec": 0,
00:05:00.633 "w_mbytes_per_sec": 0
00:05:00.633 },
00:05:00.633 "claimed": false,
00:05:00.633 "zoned": false,
00:05:00.633 "supported_io_types": {
00:05:00.633 "read": true,
00:05:00.633 "write": true,
00:05:00.633 "unmap": true,
00:05:00.633 "flush": true,
00:05:00.633 "reset": true,
00:05:00.633 "nvme_admin": false,
00:05:00.633 "nvme_io": false,
00:05:00.633 "nvme_io_md": false,
00:05:00.633 "write_zeroes": true,
00:05:00.633 "zcopy": true,
00:05:00.633 "get_zone_info": false,
00:05:00.633 "zone_management": false,
00:05:00.633 "zone_append": false,
00:05:00.633 "compare": false,
00:05:00.633 "compare_and_write": false,
00:05:00.633 "abort": true,
00:05:00.633 "seek_hole": false,
00:05:00.633 "seek_data": false,
00:05:00.633 "copy": true,
00:05:00.633 "nvme_iov_md": false
00:05:00.633 },
00:05:00.633 "memory_domains": [
00:05:00.633 {
00:05:00.633 "dma_device_id": "system",
00:05:00.633 "dma_device_type": 1
00:05:00.633 },
00:05:00.633 {
00:05:00.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:00.633 "dma_device_type": 2
00:05:00.633 }
00:05:00.633 ],
00:05:00.633 "driver_specific": {
00:05:00.633 "passthru": {
00:05:00.633 "name": "Passthru0",
00:05:00.633 "base_bdev_name": "Malloc0"
00:05:00.633 }
00:05:00.633 }
00:05:00.633 }
00:05:00.633 ]'
00:05:00.633 02:26:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:05:00.633 02:26:31 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:05:00.633 02:26:31 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:05:00.633 02:26:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:00.633 02:26:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:00.634 02:26:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:00.634 02:26:31 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:05:00.634 02:26:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:00.634 02:26:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:00.634 02:26:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:00.634 02:26:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:05:00.634 02:26:31 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:00.634 02:26:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:00.634 02:26:31 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:00.634 02:26:31 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:05:00.634 02:26:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:05:00.892 02:26:31 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:05:00.892 
00:05:00.892 real 0m0.278s
00:05:00.892 user 0m0.172s
00:05:00.892 sys 0m0.043s
00:05:00.892 02:26:31 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:00.892 02:26:31 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:00.892 ************************************
00:05:00.892 END TEST rpc_integrity
00:05:00.892 ************************************
00:05:00.892 02:26:31 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:05:00.892 02:26:31 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:00.892 02:26:31 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:00.892 02:26:31 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:00.892 ************************************
00:05:00.892 START TEST rpc_plugins
00:05:00.892 ************************************
00:05:00.892 02:26:31 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:05:00.892 02:26:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:05:00.892 02:26:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:00.892 02:26:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:00.892 02:26:31 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:00.892 02:26:31 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:05:00.892 02:26:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:05:00.892 02:26:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:00.892 02:26:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:00.892 02:26:31 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:00.892 02:26:31 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:05:00.892 {
00:05:00.892 "name": "Malloc1",
00:05:00.892 "aliases": [
00:05:00.892 "568cbfc8-d618-44b1-b3ff-eb13de2859a4"
00:05:00.892 ],
00:05:00.892 "product_name": "Malloc disk",
00:05:00.892 "block_size": 4096,
00:05:00.892 "num_blocks": 256,
00:05:00.892 "uuid": "568cbfc8-d618-44b1-b3ff-eb13de2859a4",
00:05:00.892 "assigned_rate_limits": {
00:05:00.892 "rw_ios_per_sec": 0,
00:05:00.892 "rw_mbytes_per_sec": 0,
00:05:00.892 "r_mbytes_per_sec": 0,
00:05:00.892 "w_mbytes_per_sec": 0
00:05:00.892 },
00:05:00.892 "claimed": false,
00:05:00.892 "zoned": false,
00:05:00.892 "supported_io_types": {
00:05:00.892 "read": true,
00:05:00.892 "write": true,
00:05:00.892 "unmap": true,
00:05:00.892 "flush": true,
00:05:00.892 "reset": true,
00:05:00.892 "nvme_admin": false,
00:05:00.892 "nvme_io": false,
00:05:00.892 "nvme_io_md": false,
00:05:00.893 "write_zeroes": true,
00:05:00.893 "zcopy": true,
00:05:00.893 "get_zone_info": false,
00:05:00.893 "zone_management": false,
00:05:00.893 "zone_append": false,
00:05:00.893 "compare": false,
00:05:00.893 "compare_and_write": false,
00:05:00.893 "abort": true,
00:05:00.893 "seek_hole": false,
00:05:00.893 "seek_data": false,
00:05:00.893 "copy": true,
00:05:00.893 "nvme_iov_md": false
00:05:00.893 },
00:05:00.893 "memory_domains": [
00:05:00.893 {
00:05:00.893 "dma_device_id": "system",
00:05:00.893 "dma_device_type": 1
00:05:00.893 },
00:05:00.893 {
00:05:00.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:00.893 "dma_device_type": 2
00:05:00.893 }
00:05:00.893 ],
00:05:00.893 "driver_specific": {}
00:05:00.893 }
00:05:00.893 ]'
00:05:00.893 02:26:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:05:00.893 02:26:31 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:05:00.893 02:26:31 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:05:00.893 02:26:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:00.893 02:26:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:00.893 02:26:31 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:00.893 02:26:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:05:00.893 02:26:31 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:00.893 02:26:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:00.893 02:26:31 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:00.893 02:26:31 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:05:00.893 02:26:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:05:00.893 02:26:31 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:05:00.893 
00:05:00.893 real 0m0.140s
00:05:00.893 user 0m0.088s
00:05:00.893 sys 0m0.016s
00:05:00.893 02:26:31 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:00.893 02:26:31 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:05:00.893 ************************************
00:05:00.893 END TEST rpc_plugins 00:05:00.893 ************************************ 00:05:01.151 02:26:31 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:01.151 02:26:31 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.151 02:26:31 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.151 02:26:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.151 ************************************ 00:05:01.151 START TEST rpc_trace_cmd_test 00:05:01.151 ************************************ 00:05:01.151 02:26:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:01.151 02:26:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:01.151 02:26:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:01.151 02:26:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.151 02:26:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:01.151 02:26:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.151 02:26:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:01.151 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid772395", 00:05:01.151 "tpoint_group_mask": "0x8", 00:05:01.151 "iscsi_conn": { 00:05:01.151 "mask": "0x2", 00:05:01.151 "tpoint_mask": "0x0" 00:05:01.151 }, 00:05:01.151 "scsi": { 00:05:01.151 "mask": "0x4", 00:05:01.151 "tpoint_mask": "0x0" 00:05:01.151 }, 00:05:01.151 "bdev": { 00:05:01.151 "mask": "0x8", 00:05:01.151 "tpoint_mask": "0xffffffffffffffff" 00:05:01.151 }, 00:05:01.151 "nvmf_rdma": { 00:05:01.151 "mask": "0x10", 00:05:01.151 "tpoint_mask": "0x0" 00:05:01.151 }, 00:05:01.151 "nvmf_tcp": { 00:05:01.151 "mask": "0x20", 00:05:01.151 "tpoint_mask": "0x0" 00:05:01.151 }, 00:05:01.151 "ftl": { 00:05:01.151 "mask": "0x40", 00:05:01.151 "tpoint_mask": "0x0" 00:05:01.151 }, 00:05:01.151 "blobfs": { 00:05:01.151 "mask": "0x80", 00:05:01.151 
"tpoint_mask": "0x0" 00:05:01.151 }, 00:05:01.151 "dsa": { 00:05:01.151 "mask": "0x200", 00:05:01.151 "tpoint_mask": "0x0" 00:05:01.151 }, 00:05:01.151 "thread": { 00:05:01.151 "mask": "0x400", 00:05:01.151 "tpoint_mask": "0x0" 00:05:01.151 }, 00:05:01.151 "nvme_pcie": { 00:05:01.151 "mask": "0x800", 00:05:01.151 "tpoint_mask": "0x0" 00:05:01.152 }, 00:05:01.152 "iaa": { 00:05:01.152 "mask": "0x1000", 00:05:01.152 "tpoint_mask": "0x0" 00:05:01.152 }, 00:05:01.152 "nvme_tcp": { 00:05:01.152 "mask": "0x2000", 00:05:01.152 "tpoint_mask": "0x0" 00:05:01.152 }, 00:05:01.152 "bdev_nvme": { 00:05:01.152 "mask": "0x4000", 00:05:01.152 "tpoint_mask": "0x0" 00:05:01.152 }, 00:05:01.152 "sock": { 00:05:01.152 "mask": "0x8000", 00:05:01.152 "tpoint_mask": "0x0" 00:05:01.152 }, 00:05:01.152 "blob": { 00:05:01.152 "mask": "0x10000", 00:05:01.152 "tpoint_mask": "0x0" 00:05:01.152 }, 00:05:01.152 "bdev_raid": { 00:05:01.152 "mask": "0x20000", 00:05:01.152 "tpoint_mask": "0x0" 00:05:01.152 }, 00:05:01.152 "scheduler": { 00:05:01.152 "mask": "0x40000", 00:05:01.152 "tpoint_mask": "0x0" 00:05:01.152 } 00:05:01.152 }' 00:05:01.152 02:26:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:01.152 02:26:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:01.152 02:26:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:01.152 02:26:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:01.152 02:26:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:01.152 02:26:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:01.152 02:26:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:01.152 02:26:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:01.152 02:26:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:01.152 02:26:31 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:05:01.152 00:05:01.152 real 0m0.203s 00:05:01.152 user 0m0.173s 00:05:01.152 sys 0m0.022s 00:05:01.152 02:26:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.152 02:26:31 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:01.152 ************************************ 00:05:01.152 END TEST rpc_trace_cmd_test 00:05:01.152 ************************************ 00:05:01.410 02:26:31 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:01.410 02:26:31 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:01.410 02:26:31 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:01.410 02:26:31 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.410 02:26:31 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.410 02:26:31 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.410 ************************************ 00:05:01.410 START TEST rpc_daemon_integrity 00:05:01.410 ************************************ 00:05:01.410 02:26:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:01.410 02:26:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:01.410 02:26:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.410 02:26:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.410 02:26:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.410 02:26:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:01.410 02:26:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:01.410 02:26:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:01.410 02:26:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:01.410 02:26:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.410 02:26:31 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:01.410 02:26:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.410 02:26:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:01.410 02:26:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:01.410 02:26:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.410 02:26:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.410 02:26:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.410 02:26:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:01.410 { 00:05:01.410 "name": "Malloc2", 00:05:01.410 "aliases": [ 00:05:01.411 "28b15b2a-5a17-488a-a898-41a89ff16fa4" 00:05:01.411 ], 00:05:01.411 "product_name": "Malloc disk", 00:05:01.411 "block_size": 512, 00:05:01.411 "num_blocks": 16384, 00:05:01.411 "uuid": "28b15b2a-5a17-488a-a898-41a89ff16fa4", 00:05:01.411 "assigned_rate_limits": { 00:05:01.411 "rw_ios_per_sec": 0, 00:05:01.411 "rw_mbytes_per_sec": 0, 00:05:01.411 "r_mbytes_per_sec": 0, 00:05:01.411 "w_mbytes_per_sec": 0 00:05:01.411 }, 00:05:01.411 "claimed": false, 00:05:01.411 "zoned": false, 00:05:01.411 "supported_io_types": { 00:05:01.411 "read": true, 00:05:01.411 "write": true, 00:05:01.411 "unmap": true, 00:05:01.411 "flush": true, 00:05:01.411 "reset": true, 00:05:01.411 "nvme_admin": false, 00:05:01.411 "nvme_io": false, 00:05:01.411 "nvme_io_md": false, 00:05:01.411 "write_zeroes": true, 00:05:01.411 "zcopy": true, 00:05:01.411 "get_zone_info": false, 00:05:01.411 "zone_management": false, 00:05:01.411 "zone_append": false, 00:05:01.411 "compare": false, 00:05:01.411 "compare_and_write": false, 00:05:01.411 "abort": true, 00:05:01.411 "seek_hole": false, 00:05:01.411 "seek_data": false, 00:05:01.411 "copy": true, 00:05:01.411 "nvme_iov_md": false 00:05:01.411 }, 00:05:01.411 "memory_domains": [ 00:05:01.411 { 
00:05:01.411 "dma_device_id": "system", 00:05:01.411 "dma_device_type": 1 00:05:01.411 }, 00:05:01.411 { 00:05:01.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.411 "dma_device_type": 2 00:05:01.411 } 00:05:01.411 ], 00:05:01.411 "driver_specific": {} 00:05:01.411 } 00:05:01.411 ]' 00:05:01.411 02:26:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:01.411 02:26:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:01.411 02:26:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:01.411 02:26:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.411 02:26:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.411 [2024-12-16 02:26:31.992463] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:01.411 [2024-12-16 02:26:31.992494] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:01.411 [2024-12-16 02:26:31.992507] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f7ef80 00:05:01.411 [2024-12-16 02:26:31.992514] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:01.411 [2024-12-16 02:26:31.993520] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:01.411 [2024-12-16 02:26:31.993542] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:01.411 Passthru0 00:05:01.411 02:26:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.411 02:26:31 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:01.411 02:26:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.411 02:26:31 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.411 02:26:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:05:01.411 02:26:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:01.411 { 00:05:01.411 "name": "Malloc2", 00:05:01.411 "aliases": [ 00:05:01.411 "28b15b2a-5a17-488a-a898-41a89ff16fa4" 00:05:01.411 ], 00:05:01.411 "product_name": "Malloc disk", 00:05:01.411 "block_size": 512, 00:05:01.411 "num_blocks": 16384, 00:05:01.411 "uuid": "28b15b2a-5a17-488a-a898-41a89ff16fa4", 00:05:01.411 "assigned_rate_limits": { 00:05:01.411 "rw_ios_per_sec": 0, 00:05:01.411 "rw_mbytes_per_sec": 0, 00:05:01.411 "r_mbytes_per_sec": 0, 00:05:01.411 "w_mbytes_per_sec": 0 00:05:01.411 }, 00:05:01.411 "claimed": true, 00:05:01.411 "claim_type": "exclusive_write", 00:05:01.411 "zoned": false, 00:05:01.411 "supported_io_types": { 00:05:01.411 "read": true, 00:05:01.411 "write": true, 00:05:01.411 "unmap": true, 00:05:01.411 "flush": true, 00:05:01.411 "reset": true, 00:05:01.411 "nvme_admin": false, 00:05:01.411 "nvme_io": false, 00:05:01.411 "nvme_io_md": false, 00:05:01.411 "write_zeroes": true, 00:05:01.411 "zcopy": true, 00:05:01.411 "get_zone_info": false, 00:05:01.411 "zone_management": false, 00:05:01.411 "zone_append": false, 00:05:01.411 "compare": false, 00:05:01.411 "compare_and_write": false, 00:05:01.411 "abort": true, 00:05:01.411 "seek_hole": false, 00:05:01.411 "seek_data": false, 00:05:01.411 "copy": true, 00:05:01.411 "nvme_iov_md": false 00:05:01.411 }, 00:05:01.411 "memory_domains": [ 00:05:01.411 { 00:05:01.411 "dma_device_id": "system", 00:05:01.411 "dma_device_type": 1 00:05:01.411 }, 00:05:01.411 { 00:05:01.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.411 "dma_device_type": 2 00:05:01.411 } 00:05:01.411 ], 00:05:01.411 "driver_specific": {} 00:05:01.411 }, 00:05:01.411 { 00:05:01.411 "name": "Passthru0", 00:05:01.411 "aliases": [ 00:05:01.411 "c75e64d4-c9bf-5e10-b21b-6f69e72d095d" 00:05:01.411 ], 00:05:01.411 "product_name": "passthru", 00:05:01.411 "block_size": 512, 00:05:01.411 "num_blocks": 16384, 00:05:01.411 "uuid": 
"c75e64d4-c9bf-5e10-b21b-6f69e72d095d", 00:05:01.411 "assigned_rate_limits": { 00:05:01.411 "rw_ios_per_sec": 0, 00:05:01.411 "rw_mbytes_per_sec": 0, 00:05:01.411 "r_mbytes_per_sec": 0, 00:05:01.411 "w_mbytes_per_sec": 0 00:05:01.411 }, 00:05:01.411 "claimed": false, 00:05:01.411 "zoned": false, 00:05:01.411 "supported_io_types": { 00:05:01.411 "read": true, 00:05:01.411 "write": true, 00:05:01.411 "unmap": true, 00:05:01.411 "flush": true, 00:05:01.411 "reset": true, 00:05:01.411 "nvme_admin": false, 00:05:01.411 "nvme_io": false, 00:05:01.411 "nvme_io_md": false, 00:05:01.411 "write_zeroes": true, 00:05:01.411 "zcopy": true, 00:05:01.411 "get_zone_info": false, 00:05:01.411 "zone_management": false, 00:05:01.411 "zone_append": false, 00:05:01.411 "compare": false, 00:05:01.411 "compare_and_write": false, 00:05:01.411 "abort": true, 00:05:01.411 "seek_hole": false, 00:05:01.411 "seek_data": false, 00:05:01.411 "copy": true, 00:05:01.411 "nvme_iov_md": false 00:05:01.411 }, 00:05:01.411 "memory_domains": [ 00:05:01.411 { 00:05:01.411 "dma_device_id": "system", 00:05:01.411 "dma_device_type": 1 00:05:01.411 }, 00:05:01.411 { 00:05:01.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:01.411 "dma_device_type": 2 00:05:01.411 } 00:05:01.411 ], 00:05:01.411 "driver_specific": { 00:05:01.411 "passthru": { 00:05:01.411 "name": "Passthru0", 00:05:01.411 "base_bdev_name": "Malloc2" 00:05:01.411 } 00:05:01.411 } 00:05:01.411 } 00:05:01.411 ]' 00:05:01.411 02:26:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:01.411 02:26:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:01.411 02:26:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:01.411 02:26:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.411 02:26:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.670 02:26:32 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.670 02:26:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:01.670 02:26:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.670 02:26:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.670 02:26:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.670 02:26:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:01.670 02:26:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:01.670 02:26:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.670 02:26:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.670 02:26:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:01.670 02:26:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:01.670 02:26:32 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:01.670 00:05:01.670 real 0m0.283s 00:05:01.670 user 0m0.178s 00:05:01.670 sys 0m0.038s 00:05:01.670 02:26:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.670 02:26:32 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:01.670 ************************************ 00:05:01.670 END TEST rpc_daemon_integrity 00:05:01.670 ************************************ 00:05:01.670 02:26:32 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:01.670 02:26:32 rpc -- rpc/rpc.sh@84 -- # killprocess 772395 00:05:01.670 02:26:32 rpc -- common/autotest_common.sh@954 -- # '[' -z 772395 ']' 00:05:01.670 02:26:32 rpc -- common/autotest_common.sh@958 -- # kill -0 772395 00:05:01.670 02:26:32 rpc -- common/autotest_common.sh@959 -- # uname 00:05:01.670 02:26:32 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.670 02:26:32 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 772395 00:05:01.670 02:26:32 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.670 02:26:32 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.670 02:26:32 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 772395' 00:05:01.670 killing process with pid 772395 00:05:01.670 02:26:32 rpc -- common/autotest_common.sh@973 -- # kill 772395 00:05:01.670 02:26:32 rpc -- common/autotest_common.sh@978 -- # wait 772395 00:05:01.929 00:05:01.930 real 0m2.047s 00:05:01.930 user 0m2.606s 00:05:01.930 sys 0m0.701s 00:05:01.930 02:26:32 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.930 02:26:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.930 ************************************ 00:05:01.930 END TEST rpc 00:05:01.930 ************************************ 00:05:01.930 02:26:32 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:01.930 02:26:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.930 02:26:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.930 02:26:32 -- common/autotest_common.sh@10 -- # set +x 00:05:02.189 ************************************ 00:05:02.189 START TEST skip_rpc 00:05:02.189 ************************************ 00:05:02.189 02:26:32 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:02.189 * Looking for test storage... 
00:05:02.189 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:02.189 02:26:32 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:02.189 02:26:32 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:02.189 02:26:32 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:02.189 02:26:32 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:02.189 02:26:32 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.189 02:26:32 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.189 02:26:32 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.189 02:26:32 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.189 02:26:32 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.189 02:26:32 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.189 02:26:32 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.189 02:26:32 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.189 02:26:32 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.189 02:26:32 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.189 02:26:32 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.189 02:26:32 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:02.189 02:26:32 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:02.189 02:26:32 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.189 02:26:32 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.189 02:26:32 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:02.189 02:26:32 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:02.189 02:26:32 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.189 02:26:32 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:02.189 02:26:32 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.189 02:26:32 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:02.189 02:26:32 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:02.189 02:26:32 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.189 02:26:32 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:02.189 02:26:32 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.189 02:26:32 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.189 02:26:32 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.189 02:26:32 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:02.189 02:26:32 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.189 02:26:32 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:02.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.189 --rc genhtml_branch_coverage=1 00:05:02.189 --rc genhtml_function_coverage=1 00:05:02.189 --rc genhtml_legend=1 00:05:02.189 --rc geninfo_all_blocks=1 00:05:02.189 --rc geninfo_unexecuted_blocks=1 00:05:02.189 00:05:02.189 ' 00:05:02.189 02:26:32 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:02.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.189 --rc genhtml_branch_coverage=1 00:05:02.189 --rc genhtml_function_coverage=1 00:05:02.189 --rc genhtml_legend=1 00:05:02.189 --rc geninfo_all_blocks=1 00:05:02.189 --rc geninfo_unexecuted_blocks=1 00:05:02.189 00:05:02.189 ' 00:05:02.189 02:26:32 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:02.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.189 --rc genhtml_branch_coverage=1 00:05:02.189 --rc genhtml_function_coverage=1 00:05:02.189 --rc genhtml_legend=1 00:05:02.189 --rc geninfo_all_blocks=1 00:05:02.189 --rc geninfo_unexecuted_blocks=1 00:05:02.189 00:05:02.189 ' 00:05:02.189 02:26:32 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:02.189 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.189 --rc genhtml_branch_coverage=1 00:05:02.189 --rc genhtml_function_coverage=1 00:05:02.189 --rc genhtml_legend=1 00:05:02.189 --rc geninfo_all_blocks=1 00:05:02.189 --rc geninfo_unexecuted_blocks=1 00:05:02.189 00:05:02.189 ' 00:05:02.189 02:26:32 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:02.189 02:26:32 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:02.189 02:26:32 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:02.189 02:26:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.189 02:26:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.189 02:26:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.189 ************************************ 00:05:02.189 START TEST skip_rpc 00:05:02.189 ************************************ 00:05:02.190 02:26:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:02.190 02:26:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=773016 00:05:02.190 02:26:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.190 02:26:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:02.190 02:26:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:05:02.448 [2024-12-16 02:26:32.855357] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:02.448 [2024-12-16 02:26:32.855393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773016 ] 00:05:02.448 [2024-12-16 02:26:32.930082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.448 [2024-12-16 02:26:32.952173] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.716 02:26:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:07.716 02:26:37 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:07.716 02:26:37 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:07.716 02:26:37 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:07.716 02:26:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:07.716 02:26:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:07.716 02:26:37 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:07.716 02:26:37 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:07.716 02:26:37 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.716 02:26:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.716 02:26:37 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:07.716 02:26:37 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:07.716 02:26:37 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:07.716 02:26:37 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:07.716 02:26:37 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:07.716 02:26:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:07.716 02:26:37 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 773016 00:05:07.716 02:26:37 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 773016 ']' 00:05:07.716 02:26:37 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 773016 00:05:07.716 02:26:37 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:07.716 02:26:37 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.716 02:26:37 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 773016 00:05:07.716 02:26:37 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.716 02:26:37 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.716 02:26:37 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 773016' 00:05:07.716 killing process with pid 773016 00:05:07.716 02:26:37 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 773016 00:05:07.716 02:26:37 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 773016 00:05:07.716 00:05:07.716 real 0m5.359s 00:05:07.716 user 0m5.118s 00:05:07.716 sys 0m0.279s 00:05:07.716 02:26:38 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.716 02:26:38 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.716 ************************************ 00:05:07.716 END TEST skip_rpc 00:05:07.716 ************************************ 00:05:07.716 02:26:38 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:07.716 02:26:38 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.716 02:26:38 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.716 02:26:38 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:05:07.716 ************************************ 00:05:07.716 START TEST skip_rpc_with_json 00:05:07.716 ************************************ 00:05:07.716 02:26:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:07.716 02:26:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:07.716 02:26:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=773940 00:05:07.716 02:26:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:07.716 02:26:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:07.716 02:26:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 773940 00:05:07.716 02:26:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 773940 ']' 00:05:07.716 02:26:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.716 02:26:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.716 02:26:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.716 02:26:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.716 02:26:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:07.716 [2024-12-16 02:26:38.284626] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:07.716 [2024-12-16 02:26:38.284667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773940 ] 00:05:07.716 [2024-12-16 02:26:38.360723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.975 [2024-12-16 02:26:38.380508] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.975 02:26:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.975 02:26:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:07.975 02:26:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:07.975 02:26:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.975 02:26:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:07.975 [2024-12-16 02:26:38.591991] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:07.975 request: 00:05:07.975 { 00:05:07.975 "trtype": "tcp", 00:05:07.975 "method": "nvmf_get_transports", 00:05:07.975 "req_id": 1 00:05:07.975 } 00:05:07.975 Got JSON-RPC error response 00:05:07.975 response: 00:05:07.975 { 00:05:07.975 "code": -19, 00:05:07.975 "message": "No such device" 00:05:07.975 } 00:05:07.975 02:26:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:07.975 02:26:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:07.975 02:26:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.975 02:26:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:07.975 [2024-12-16 02:26:38.604097] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:07.975 02:26:38 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:07.975 02:26:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:07.975 02:26:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:07.975 02:26:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:08.234 02:26:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:08.235 02:26:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:08.235 { 00:05:08.235 "subsystems": [ 00:05:08.235 { 00:05:08.235 "subsystem": "fsdev", 00:05:08.235 "config": [ 00:05:08.235 { 00:05:08.235 "method": "fsdev_set_opts", 00:05:08.235 "params": { 00:05:08.235 "fsdev_io_pool_size": 65535, 00:05:08.235 "fsdev_io_cache_size": 256 00:05:08.235 } 00:05:08.235 } 00:05:08.235 ] 00:05:08.235 }, 00:05:08.235 { 00:05:08.235 "subsystem": "vfio_user_target", 00:05:08.235 "config": null 00:05:08.235 }, 00:05:08.235 { 00:05:08.235 "subsystem": "keyring", 00:05:08.235 "config": [] 00:05:08.235 }, 00:05:08.235 { 00:05:08.235 "subsystem": "iobuf", 00:05:08.235 "config": [ 00:05:08.235 { 00:05:08.235 "method": "iobuf_set_options", 00:05:08.235 "params": { 00:05:08.235 "small_pool_count": 8192, 00:05:08.235 "large_pool_count": 1024, 00:05:08.235 "small_bufsize": 8192, 00:05:08.235 "large_bufsize": 135168, 00:05:08.235 "enable_numa": false 00:05:08.235 } 00:05:08.235 } 00:05:08.235 ] 00:05:08.235 }, 00:05:08.235 { 00:05:08.235 "subsystem": "sock", 00:05:08.235 "config": [ 00:05:08.235 { 00:05:08.235 "method": "sock_set_default_impl", 00:05:08.235 "params": { 00:05:08.235 "impl_name": "posix" 00:05:08.235 } 00:05:08.235 }, 00:05:08.235 { 00:05:08.235 "method": "sock_impl_set_options", 00:05:08.235 "params": { 00:05:08.235 "impl_name": "ssl", 00:05:08.235 "recv_buf_size": 4096, 00:05:08.235 "send_buf_size": 4096, 
00:05:08.235 "enable_recv_pipe": true, 00:05:08.235 "enable_quickack": false, 00:05:08.235 "enable_placement_id": 0, 00:05:08.235 "enable_zerocopy_send_server": true, 00:05:08.235 "enable_zerocopy_send_client": false, 00:05:08.235 "zerocopy_threshold": 0, 00:05:08.235 "tls_version": 0, 00:05:08.235 "enable_ktls": false 00:05:08.235 } 00:05:08.235 }, 00:05:08.235 { 00:05:08.235 "method": "sock_impl_set_options", 00:05:08.235 "params": { 00:05:08.235 "impl_name": "posix", 00:05:08.235 "recv_buf_size": 2097152, 00:05:08.235 "send_buf_size": 2097152, 00:05:08.235 "enable_recv_pipe": true, 00:05:08.235 "enable_quickack": false, 00:05:08.235 "enable_placement_id": 0, 00:05:08.235 "enable_zerocopy_send_server": true, 00:05:08.235 "enable_zerocopy_send_client": false, 00:05:08.235 "zerocopy_threshold": 0, 00:05:08.235 "tls_version": 0, 00:05:08.235 "enable_ktls": false 00:05:08.235 } 00:05:08.235 } 00:05:08.235 ] 00:05:08.235 }, 00:05:08.235 { 00:05:08.235 "subsystem": "vmd", 00:05:08.235 "config": [] 00:05:08.235 }, 00:05:08.235 { 00:05:08.235 "subsystem": "accel", 00:05:08.235 "config": [ 00:05:08.235 { 00:05:08.235 "method": "accel_set_options", 00:05:08.235 "params": { 00:05:08.235 "small_cache_size": 128, 00:05:08.235 "large_cache_size": 16, 00:05:08.235 "task_count": 2048, 00:05:08.235 "sequence_count": 2048, 00:05:08.235 "buf_count": 2048 00:05:08.235 } 00:05:08.235 } 00:05:08.235 ] 00:05:08.235 }, 00:05:08.235 { 00:05:08.235 "subsystem": "bdev", 00:05:08.235 "config": [ 00:05:08.235 { 00:05:08.235 "method": "bdev_set_options", 00:05:08.235 "params": { 00:05:08.235 "bdev_io_pool_size": 65535, 00:05:08.235 "bdev_io_cache_size": 256, 00:05:08.235 "bdev_auto_examine": true, 00:05:08.235 "iobuf_small_cache_size": 128, 00:05:08.235 "iobuf_large_cache_size": 16 00:05:08.235 } 00:05:08.235 }, 00:05:08.235 { 00:05:08.235 "method": "bdev_raid_set_options", 00:05:08.235 "params": { 00:05:08.235 "process_window_size_kb": 1024, 00:05:08.235 "process_max_bandwidth_mb_sec": 0 
00:05:08.235 } 00:05:08.235 }, 00:05:08.235 { 00:05:08.235 "method": "bdev_iscsi_set_options", 00:05:08.235 "params": { 00:05:08.235 "timeout_sec": 30 00:05:08.235 } 00:05:08.235 }, 00:05:08.235 { 00:05:08.235 "method": "bdev_nvme_set_options", 00:05:08.235 "params": { 00:05:08.235 "action_on_timeout": "none", 00:05:08.235 "timeout_us": 0, 00:05:08.235 "timeout_admin_us": 0, 00:05:08.235 "keep_alive_timeout_ms": 10000, 00:05:08.235 "arbitration_burst": 0, 00:05:08.235 "low_priority_weight": 0, 00:05:08.235 "medium_priority_weight": 0, 00:05:08.235 "high_priority_weight": 0, 00:05:08.235 "nvme_adminq_poll_period_us": 10000, 00:05:08.235 "nvme_ioq_poll_period_us": 0, 00:05:08.235 "io_queue_requests": 0, 00:05:08.235 "delay_cmd_submit": true, 00:05:08.235 "transport_retry_count": 4, 00:05:08.235 "bdev_retry_count": 3, 00:05:08.235 "transport_ack_timeout": 0, 00:05:08.235 "ctrlr_loss_timeout_sec": 0, 00:05:08.235 "reconnect_delay_sec": 0, 00:05:08.235 "fast_io_fail_timeout_sec": 0, 00:05:08.235 "disable_auto_failback": false, 00:05:08.235 "generate_uuids": false, 00:05:08.235 "transport_tos": 0, 00:05:08.235 "nvme_error_stat": false, 00:05:08.235 "rdma_srq_size": 0, 00:05:08.235 "io_path_stat": false, 00:05:08.235 "allow_accel_sequence": false, 00:05:08.235 "rdma_max_cq_size": 0, 00:05:08.235 "rdma_cm_event_timeout_ms": 0, 00:05:08.235 "dhchap_digests": [ 00:05:08.235 "sha256", 00:05:08.235 "sha384", 00:05:08.235 "sha512" 00:05:08.235 ], 00:05:08.235 "dhchap_dhgroups": [ 00:05:08.235 "null", 00:05:08.235 "ffdhe2048", 00:05:08.235 "ffdhe3072", 00:05:08.235 "ffdhe4096", 00:05:08.235 "ffdhe6144", 00:05:08.235 "ffdhe8192" 00:05:08.235 ], 00:05:08.235 "rdma_umr_per_io": false 00:05:08.235 } 00:05:08.235 }, 00:05:08.235 { 00:05:08.235 "method": "bdev_nvme_set_hotplug", 00:05:08.235 "params": { 00:05:08.235 "period_us": 100000, 00:05:08.235 "enable": false 00:05:08.235 } 00:05:08.235 }, 00:05:08.235 { 00:05:08.235 "method": "bdev_wait_for_examine" 00:05:08.235 } 00:05:08.235 
] 00:05:08.235 }, 00:05:08.235 { 00:05:08.235 "subsystem": "scsi", 00:05:08.235 "config": null 00:05:08.235 }, 00:05:08.235 { 00:05:08.235 "subsystem": "scheduler", 00:05:08.235 "config": [ 00:05:08.235 { 00:05:08.235 "method": "framework_set_scheduler", 00:05:08.235 "params": { 00:05:08.235 "name": "static" 00:05:08.235 } 00:05:08.235 } 00:05:08.235 ] 00:05:08.235 }, 00:05:08.235 { 00:05:08.235 "subsystem": "vhost_scsi", 00:05:08.235 "config": [] 00:05:08.235 }, 00:05:08.235 { 00:05:08.235 "subsystem": "vhost_blk", 00:05:08.235 "config": [] 00:05:08.235 }, 00:05:08.235 { 00:05:08.235 "subsystem": "ublk", 00:05:08.235 "config": [] 00:05:08.235 }, 00:05:08.235 { 00:05:08.235 "subsystem": "nbd", 00:05:08.235 "config": [] 00:05:08.235 }, 00:05:08.235 { 00:05:08.235 "subsystem": "nvmf", 00:05:08.235 "config": [ 00:05:08.235 { 00:05:08.235 "method": "nvmf_set_config", 00:05:08.235 "params": { 00:05:08.235 "discovery_filter": "match_any", 00:05:08.235 "admin_cmd_passthru": { 00:05:08.235 "identify_ctrlr": false 00:05:08.235 }, 00:05:08.235 "dhchap_digests": [ 00:05:08.235 "sha256", 00:05:08.235 "sha384", 00:05:08.235 "sha512" 00:05:08.235 ], 00:05:08.235 "dhchap_dhgroups": [ 00:05:08.235 "null", 00:05:08.235 "ffdhe2048", 00:05:08.235 "ffdhe3072", 00:05:08.235 "ffdhe4096", 00:05:08.235 "ffdhe6144", 00:05:08.235 "ffdhe8192" 00:05:08.235 ] 00:05:08.235 } 00:05:08.235 }, 00:05:08.235 { 00:05:08.235 "method": "nvmf_set_max_subsystems", 00:05:08.235 "params": { 00:05:08.235 "max_subsystems": 1024 00:05:08.235 } 00:05:08.235 }, 00:05:08.235 { 00:05:08.235 "method": "nvmf_set_crdt", 00:05:08.235 "params": { 00:05:08.235 "crdt1": 0, 00:05:08.235 "crdt2": 0, 00:05:08.235 "crdt3": 0 00:05:08.235 } 00:05:08.235 }, 00:05:08.235 { 00:05:08.235 "method": "nvmf_create_transport", 00:05:08.235 "params": { 00:05:08.235 "trtype": "TCP", 00:05:08.235 "max_queue_depth": 128, 00:05:08.235 "max_io_qpairs_per_ctrlr": 127, 00:05:08.235 "in_capsule_data_size": 4096, 00:05:08.235 "max_io_size": 
131072, 00:05:08.235 "io_unit_size": 131072, 00:05:08.235 "max_aq_depth": 128, 00:05:08.235 "num_shared_buffers": 511, 00:05:08.235 "buf_cache_size": 4294967295, 00:05:08.235 "dif_insert_or_strip": false, 00:05:08.235 "zcopy": false, 00:05:08.235 "c2h_success": true, 00:05:08.235 "sock_priority": 0, 00:05:08.235 "abort_timeout_sec": 1, 00:05:08.235 "ack_timeout": 0, 00:05:08.235 "data_wr_pool_size": 0 00:05:08.235 } 00:05:08.235 } 00:05:08.235 ] 00:05:08.235 }, 00:05:08.235 { 00:05:08.235 "subsystem": "iscsi", 00:05:08.235 "config": [ 00:05:08.235 { 00:05:08.235 "method": "iscsi_set_options", 00:05:08.235 "params": { 00:05:08.235 "node_base": "iqn.2016-06.io.spdk", 00:05:08.235 "max_sessions": 128, 00:05:08.235 "max_connections_per_session": 2, 00:05:08.235 "max_queue_depth": 64, 00:05:08.235 "default_time2wait": 2, 00:05:08.235 "default_time2retain": 20, 00:05:08.235 "first_burst_length": 8192, 00:05:08.235 "immediate_data": true, 00:05:08.235 "allow_duplicated_isid": false, 00:05:08.235 "error_recovery_level": 0, 00:05:08.235 "nop_timeout": 60, 00:05:08.235 "nop_in_interval": 30, 00:05:08.235 "disable_chap": false, 00:05:08.235 "require_chap": false, 00:05:08.235 "mutual_chap": false, 00:05:08.235 "chap_group": 0, 00:05:08.235 "max_large_datain_per_connection": 64, 00:05:08.235 "max_r2t_per_connection": 4, 00:05:08.235 "pdu_pool_size": 36864, 00:05:08.235 "immediate_data_pool_size": 16384, 00:05:08.235 "data_out_pool_size": 2048 00:05:08.235 } 00:05:08.236 } 00:05:08.236 ] 00:05:08.236 } 00:05:08.236 ] 00:05:08.236 } 00:05:08.236 02:26:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:08.236 02:26:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 773940 00:05:08.236 02:26:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 773940 ']' 00:05:08.236 02:26:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 773940 00:05:08.236 02:26:38 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:08.236 02:26:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.236 02:26:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 773940 00:05:08.236 02:26:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.236 02:26:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.236 02:26:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 773940' 00:05:08.236 killing process with pid 773940 00:05:08.236 02:26:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 773940 00:05:08.236 02:26:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 773940 00:05:08.495 02:26:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=774040 00:05:08.495 02:26:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:08.495 02:26:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:13.760 02:26:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 774040 00:05:13.760 02:26:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 774040 ']' 00:05:13.760 02:26:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 774040 00:05:13.760 02:26:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:13.760 02:26:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.760 02:26:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 774040 00:05:13.760 02:26:44 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:13.760 02:26:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:13.760 02:26:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 774040' 00:05:13.760 killing process with pid 774040 00:05:13.761 02:26:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 774040 00:05:13.761 02:26:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 774040 00:05:14.020 02:26:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:14.020 02:26:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:14.020 00:05:14.020 real 0m6.254s 00:05:14.020 user 0m5.979s 00:05:14.020 sys 0m0.581s 00:05:14.020 02:26:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.020 02:26:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:14.020 ************************************ 00:05:14.020 END TEST skip_rpc_with_json 00:05:14.020 ************************************ 00:05:14.020 02:26:44 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:14.020 02:26:44 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.020 02:26:44 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.020 02:26:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.020 ************************************ 00:05:14.020 START TEST skip_rpc_with_delay 00:05:14.020 ************************************ 00:05:14.020 02:26:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:14.020 02:26:44 skip_rpc.skip_rpc_with_delay -- 
rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:14.020 02:26:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:14.020 02:26:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:14.020 02:26:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.020 02:26:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:14.020 02:26:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.020 02:26:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:14.020 02:26:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.020 02:26:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:14.020 02:26:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.020 02:26:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:14.020 02:26:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:14.020 [2024-12-16 02:26:44.617739] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:14.020 02:26:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:14.020 02:26:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:14.020 02:26:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:14.020 02:26:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:14.020 00:05:14.020 real 0m0.070s 00:05:14.020 user 0m0.045s 00:05:14.020 sys 0m0.025s 00:05:14.020 02:26:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.020 02:26:44 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:14.020 ************************************ 00:05:14.020 END TEST skip_rpc_with_delay 00:05:14.020 ************************************ 00:05:14.020 02:26:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:14.020 02:26:44 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:14.020 02:26:44 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:14.020 02:26:44 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.020 02:26:44 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.020 02:26:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.280 ************************************ 00:05:14.280 START TEST exit_on_failed_rpc_init 00:05:14.280 ************************************ 00:05:14.280 02:26:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:14.280 02:26:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=775085 00:05:14.280 02:26:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 775085 00:05:14.280 02:26:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:05:14.280 02:26:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 775085 ']' 00:05:14.280 02:26:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.280 02:26:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.280 02:26:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.280 02:26:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.280 02:26:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:14.280 [2024-12-16 02:26:44.756380] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:14.280 [2024-12-16 02:26:44.756421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775085 ] 00:05:14.280 [2024-12-16 02:26:44.830565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.280 [2024-12-16 02:26:44.853172] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.538 02:26:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.538 02:26:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:14.538 02:26:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.538 02:26:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:14.538 
02:26:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:14.538 02:26:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:14.538 02:26:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.538 02:26:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:14.538 02:26:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.538 02:26:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:14.538 02:26:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.538 02:26:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:14.538 02:26:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:14.538 02:26:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:14.538 02:26:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:14.538 [2024-12-16 02:26:45.117619] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:14.538 [2024-12-16 02:26:45.117659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775130 ] 00:05:14.538 [2024-12-16 02:26:45.192906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.796 [2024-12-16 02:26:45.215590] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.796 [2024-12-16 02:26:45.215646] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:14.796 [2024-12-16 02:26:45.215655] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:14.796 [2024-12-16 02:26:45.215661] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:14.796 02:26:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:14.796 02:26:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:14.796 02:26:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:14.796 02:26:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:14.796 02:26:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:14.796 02:26:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:14.796 02:26:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:14.796 02:26:45 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 775085 00:05:14.796 02:26:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 775085 ']' 00:05:14.796 02:26:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 775085 00:05:14.796 02:26:45 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:14.796 02:26:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:14.796 02:26:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 775085 00:05:14.796 02:26:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:14.796 02:26:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:14.796 02:26:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 775085' 00:05:14.796 killing process with pid 775085 00:05:14.796 02:26:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 775085 00:05:14.796 02:26:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 775085 00:05:15.055 00:05:15.055 real 0m0.896s 00:05:15.055 user 0m0.931s 00:05:15.055 sys 0m0.392s 00:05:15.055 02:26:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.055 02:26:45 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:15.055 ************************************ 00:05:15.055 END TEST exit_on_failed_rpc_init 00:05:15.055 ************************************ 00:05:15.055 02:26:45 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:15.055 00:05:15.055 real 0m13.045s 00:05:15.055 user 0m12.274s 00:05:15.055 sys 0m1.573s 00:05:15.055 02:26:45 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.055 02:26:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.055 ************************************ 00:05:15.055 END TEST skip_rpc 00:05:15.055 ************************************ 00:05:15.055 02:26:45 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:15.055 02:26:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.055 02:26:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.055 02:26:45 -- common/autotest_common.sh@10 -- # set +x 00:05:15.055 ************************************ 00:05:15.055 START TEST rpc_client 00:05:15.056 ************************************ 00:05:15.056 02:26:45 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:15.315 * Looking for test storage... 00:05:15.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:15.315 02:26:45 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:15.315 02:26:45 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:15.315 02:26:45 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:15.315 02:26:45 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:15.315 02:26:45 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.315 02:26:45 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.315 02:26:45 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.315 02:26:45 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.315 02:26:45 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.315 02:26:45 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.315 02:26:45 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.315 02:26:45 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.315 02:26:45 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.315 02:26:45 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.315 02:26:45 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.315 02:26:45 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:05:15.315 02:26:45 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:15.315 02:26:45 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.315 02:26:45 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:15.315 02:26:45 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:15.315 02:26:45 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:15.315 02:26:45 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.315 02:26:45 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:15.315 02:26:45 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.315 02:26:45 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:15.315 02:26:45 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:15.315 02:26:45 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.315 02:26:45 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:15.315 02:26:45 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.315 02:26:45 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.315 02:26:45 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.315 02:26:45 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:15.315 02:26:45 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.315 02:26:45 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:15.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.315 --rc genhtml_branch_coverage=1 00:05:15.315 --rc genhtml_function_coverage=1 00:05:15.315 --rc genhtml_legend=1 00:05:15.315 --rc geninfo_all_blocks=1 00:05:15.315 --rc geninfo_unexecuted_blocks=1 00:05:15.315 00:05:15.315 ' 00:05:15.315 02:26:45 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:15.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.315 --rc genhtml_branch_coverage=1 
00:05:15.315 --rc genhtml_function_coverage=1 00:05:15.315 --rc genhtml_legend=1 00:05:15.315 --rc geninfo_all_blocks=1 00:05:15.315 --rc geninfo_unexecuted_blocks=1 00:05:15.315 00:05:15.315 ' 00:05:15.315 02:26:45 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:15.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.315 --rc genhtml_branch_coverage=1 00:05:15.315 --rc genhtml_function_coverage=1 00:05:15.315 --rc genhtml_legend=1 00:05:15.315 --rc geninfo_all_blocks=1 00:05:15.315 --rc geninfo_unexecuted_blocks=1 00:05:15.315 00:05:15.315 ' 00:05:15.315 02:26:45 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:15.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.315 --rc genhtml_branch_coverage=1 00:05:15.315 --rc genhtml_function_coverage=1 00:05:15.315 --rc genhtml_legend=1 00:05:15.315 --rc geninfo_all_blocks=1 00:05:15.315 --rc geninfo_unexecuted_blocks=1 00:05:15.315 00:05:15.315 ' 00:05:15.315 02:26:45 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:15.315 OK 00:05:15.315 02:26:45 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:15.315 00:05:15.315 real 0m0.199s 00:05:15.315 user 0m0.117s 00:05:15.315 sys 0m0.095s 00:05:15.315 02:26:45 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.315 02:26:45 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:15.315 ************************************ 00:05:15.315 END TEST rpc_client 00:05:15.315 ************************************ 00:05:15.315 02:26:45 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:15.315 02:26:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.315 02:26:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.315 02:26:45 -- common/autotest_common.sh@10 
-- # set +x 00:05:15.575 ************************************ 00:05:15.575 START TEST json_config 00:05:15.575 ************************************ 00:05:15.575 02:26:45 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:15.575 02:26:46 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:15.575 02:26:46 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:15.575 02:26:46 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:15.575 02:26:46 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:15.575 02:26:46 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:15.575 02:26:46 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:15.575 02:26:46 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:15.575 02:26:46 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:15.575 02:26:46 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:15.575 02:26:46 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:15.575 02:26:46 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:15.575 02:26:46 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:15.575 02:26:46 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:15.575 02:26:46 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:15.575 02:26:46 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:15.575 02:26:46 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:15.575 02:26:46 json_config -- scripts/common.sh@345 -- # : 1 00:05:15.575 02:26:46 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:15.575 02:26:46 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:15.575 02:26:46 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:15.575 02:26:46 json_config -- scripts/common.sh@353 -- # local d=1 00:05:15.575 02:26:46 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:15.575 02:26:46 json_config -- scripts/common.sh@355 -- # echo 1 00:05:15.575 02:26:46 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:15.575 02:26:46 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:15.575 02:26:46 json_config -- scripts/common.sh@353 -- # local d=2 00:05:15.575 02:26:46 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:15.575 02:26:46 json_config -- scripts/common.sh@355 -- # echo 2 00:05:15.575 02:26:46 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:15.575 02:26:46 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:15.575 02:26:46 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:15.575 02:26:46 json_config -- scripts/common.sh@368 -- # return 0 00:05:15.575 02:26:46 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:15.575 02:26:46 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:15.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.575 --rc genhtml_branch_coverage=1 00:05:15.575 --rc genhtml_function_coverage=1 00:05:15.575 --rc genhtml_legend=1 00:05:15.575 --rc geninfo_all_blocks=1 00:05:15.575 --rc geninfo_unexecuted_blocks=1 00:05:15.575 00:05:15.575 ' 00:05:15.575 02:26:46 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:15.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.575 --rc genhtml_branch_coverage=1 00:05:15.575 --rc genhtml_function_coverage=1 00:05:15.575 --rc genhtml_legend=1 00:05:15.575 --rc geninfo_all_blocks=1 00:05:15.575 --rc geninfo_unexecuted_blocks=1 00:05:15.575 00:05:15.575 ' 00:05:15.575 02:26:46 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:15.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.575 --rc genhtml_branch_coverage=1 00:05:15.575 --rc genhtml_function_coverage=1 00:05:15.575 --rc genhtml_legend=1 00:05:15.575 --rc geninfo_all_blocks=1 00:05:15.575 --rc geninfo_unexecuted_blocks=1 00:05:15.575 00:05:15.575 ' 00:05:15.575 02:26:46 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:15.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:15.575 --rc genhtml_branch_coverage=1 00:05:15.575 --rc genhtml_function_coverage=1 00:05:15.575 --rc genhtml_legend=1 00:05:15.575 --rc geninfo_all_blocks=1 00:05:15.575 --rc geninfo_unexecuted_blocks=1 00:05:15.575 00:05:15.575 ' 00:05:15.575 02:26:46 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:15.575 02:26:46 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:15.575 02:26:46 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:15.575 02:26:46 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:15.575 02:26:46 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:15.575 02:26:46 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:15.575 02:26:46 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:15.575 02:26:46 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:15.575 02:26:46 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:15.575 02:26:46 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:15.575 02:26:46 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:15.575 02:26:46 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:15.576 02:26:46 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:15.576 02:26:46 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:15.576 02:26:46 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:15.576 02:26:46 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:15.576 02:26:46 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:15.576 02:26:46 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:15.576 02:26:46 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:15.576 02:26:46 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:15.576 02:26:46 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:15.576 02:26:46 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:15.576 02:26:46 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:15.576 02:26:46 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.576 02:26:46 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.576 02:26:46 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.576 02:26:46 json_config -- paths/export.sh@5 -- # export PATH 00:05:15.576 02:26:46 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:15.576 02:26:46 json_config -- nvmf/common.sh@51 -- # : 0 00:05:15.576 02:26:46 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:15.576 02:26:46 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:15.576 02:26:46 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:15.576 02:26:46 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:15.576 02:26:46 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:15.576 02:26:46 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:15.576 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:15.576 02:26:46 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:15.576 02:26:46 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:15.576 02:26:46 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:15.576 02:26:46 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:15.576 02:26:46 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:15.576 02:26:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:15.576 02:26:46 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:15.576 02:26:46 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:15.576 02:26:46 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:15.576 02:26:46 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:15.576 02:26:46 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:15.576 02:26:46 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:15.576 02:26:46 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:15.576 02:26:46 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:15.576 02:26:46 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:15.576 02:26:46 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:15.576 02:26:46 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:15.576 02:26:46 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:15.576 02:26:46 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:15.576 INFO: JSON configuration test init 00:05:15.576 02:26:46 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:15.576 02:26:46 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:15.576 02:26:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.576 02:26:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.576 02:26:46 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:15.576 02:26:46 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:15.576 02:26:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.576 02:26:46 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:15.576 02:26:46 json_config -- json_config/common.sh@9 -- # local app=target 00:05:15.576 02:26:46 json_config -- json_config/common.sh@10 -- # shift 00:05:15.576 02:26:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:15.576 02:26:46 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:15.576 02:26:46 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:15.576 02:26:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.576 02:26:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:15.576 02:26:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=775478 00:05:15.576 02:26:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:15.576 Waiting for target to run... 
00:05:15.576 02:26:46 json_config -- json_config/common.sh@25 -- # waitforlisten 775478 /var/tmp/spdk_tgt.sock 00:05:15.576 02:26:46 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:15.576 02:26:46 json_config -- common/autotest_common.sh@835 -- # '[' -z 775478 ']' 00:05:15.576 02:26:46 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:15.576 02:26:46 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:15.576 02:26:46 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:15.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:15.576 02:26:46 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:15.576 02:26:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:15.576 [2024-12-16 02:26:46.230593] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:15.576 [2024-12-16 02:26:46.230644] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775478 ] 00:05:16.143 [2024-12-16 02:26:46.680932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.143 [2024-12-16 02:26:46.702990] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.401 02:26:47 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.401 02:26:47 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:16.401 02:26:47 json_config -- json_config/common.sh@26 -- # echo '' 00:05:16.401 00:05:16.401 02:26:47 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:16.401 02:26:47 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:16.401 02:26:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:16.401 02:26:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.401 02:26:47 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:16.401 02:26:47 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:16.401 02:26:47 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:16.401 02:26:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.659 02:26:47 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:16.659 02:26:47 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:16.659 02:26:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:19.945 02:26:50 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:05:19.945 02:26:50 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:19.945 02:26:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:19.945 02:26:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.945 02:26:50 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:19.945 02:26:50 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:19.945 02:26:50 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:19.945 02:26:50 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:19.945 02:26:50 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:19.945 02:26:50 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:19.945 02:26:50 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:19.945 02:26:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:19.945 02:26:50 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:19.945 02:26:50 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:19.945 02:26:50 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:19.945 02:26:50 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:19.945 02:26:50 json_config -- json_config/json_config.sh@54 -- # sort 00:05:19.945 02:26:50 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:19.945 02:26:50 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:19.945 02:26:50 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:05:19.945 02:26:50 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:19.945 02:26:50 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:19.945 02:26:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:19.945 02:26:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.945 02:26:50 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:19.945 02:26:50 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:19.945 02:26:50 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:19.945 02:26:50 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:19.945 02:26:50 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:19.945 02:26:50 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:19.945 02:26:50 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:19.945 02:26:50 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:19.945 02:26:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.945 02:26:50 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:19.945 02:26:50 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:19.945 02:26:50 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:19.945 02:26:50 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:19.945 02:26:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:19.945 MallocForNvmf0 00:05:20.203 02:26:50 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:05:20.203 02:26:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:20.203 MallocForNvmf1 00:05:20.203 02:26:50 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:20.203 02:26:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:20.461 [2024-12-16 02:26:50.971364] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:20.461 02:26:51 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:20.461 02:26:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:20.720 02:26:51 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:20.720 02:26:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:20.978 02:26:51 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:20.978 02:26:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:20.978 02:26:51 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:20.978 02:26:51 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:21.237 [2024-12-16 02:26:51.761722] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:21.237 02:26:51 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:21.237 02:26:51 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:21.237 02:26:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.237 02:26:51 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:21.237 02:26:51 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:21.237 02:26:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.237 02:26:51 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:21.237 02:26:51 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:21.237 02:26:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:21.495 MallocBdevForConfigChangeCheck 00:05:21.495 02:26:52 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:21.495 02:26:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:21.495 02:26:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:21.495 02:26:52 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:21.495 02:26:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:21.753 02:26:52 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:05:21.753 INFO: shutting down applications... 00:05:21.753 02:26:52 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:21.753 02:26:52 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:21.753 02:26:52 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:21.753 02:26:52 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:23.654 Calling clear_iscsi_subsystem 00:05:23.654 Calling clear_nvmf_subsystem 00:05:23.654 Calling clear_nbd_subsystem 00:05:23.654 Calling clear_ublk_subsystem 00:05:23.654 Calling clear_vhost_blk_subsystem 00:05:23.654 Calling clear_vhost_scsi_subsystem 00:05:23.654 Calling clear_bdev_subsystem 00:05:23.654 02:26:54 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:23.654 02:26:54 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:23.654 02:26:54 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:23.654 02:26:54 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:23.654 02:26:54 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:23.654 02:26:54 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:23.913 02:26:54 json_config -- json_config/json_config.sh@352 -- # break 00:05:23.913 02:26:54 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:23.913 02:26:54 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:05:23.913 02:26:54 json_config -- json_config/common.sh@31 -- # local app=target 00:05:23.913 02:26:54 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:23.913 02:26:54 json_config -- json_config/common.sh@35 -- # [[ -n 775478 ]] 00:05:23.913 02:26:54 json_config -- json_config/common.sh@38 -- # kill -SIGINT 775478 00:05:23.913 02:26:54 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:23.913 02:26:54 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:23.913 02:26:54 json_config -- json_config/common.sh@41 -- # kill -0 775478 00:05:23.913 02:26:54 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:24.480 02:26:54 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:24.480 02:26:54 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:24.480 02:26:54 json_config -- json_config/common.sh@41 -- # kill -0 775478 00:05:24.480 02:26:54 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:24.480 02:26:54 json_config -- json_config/common.sh@43 -- # break 00:05:24.480 02:26:54 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:24.480 02:26:54 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:24.480 SPDK target shutdown done 00:05:24.480 02:26:54 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:24.480 INFO: relaunching applications... 
00:05:24.480 02:26:54 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:24.480 02:26:54 json_config -- json_config/common.sh@9 -- # local app=target 00:05:24.480 02:26:54 json_config -- json_config/common.sh@10 -- # shift 00:05:24.480 02:26:54 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:24.480 02:26:54 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:24.480 02:26:54 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:24.480 02:26:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.480 02:26:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.480 02:26:54 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=776958 00:05:24.480 02:26:54 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:24.480 Waiting for target to run... 00:05:24.480 02:26:54 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:24.480 02:26:54 json_config -- json_config/common.sh@25 -- # waitforlisten 776958 /var/tmp/spdk_tgt.sock 00:05:24.481 02:26:54 json_config -- common/autotest_common.sh@835 -- # '[' -z 776958 ']' 00:05:24.481 02:26:54 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:24.481 02:26:54 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.481 02:26:54 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:24.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:24.481 02:26:54 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.481 02:26:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.481 [2024-12-16 02:26:54.961104] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:24.481 [2024-12-16 02:26:54.961161] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid776958 ] 00:05:25.047 [2024-12-16 02:26:55.412682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.047 [2024-12-16 02:26:55.433623] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.333 [2024-12-16 02:26:58.438442] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:28.333 [2024-12-16 02:26:58.470705] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:28.592 02:26:59 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.592 02:26:59 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:28.592 02:26:59 json_config -- json_config/common.sh@26 -- # echo '' 00:05:28.592 00:05:28.592 02:26:59 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:28.592 02:26:59 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:28.592 INFO: Checking if target configuration is the same... 
00:05:28.592 02:26:59 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:28.592 02:26:59 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:28.592 02:26:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:28.592 + '[' 2 -ne 2 ']' 00:05:28.592 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:28.592 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:28.592 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:28.592 +++ basename /dev/fd/62 00:05:28.592 ++ mktemp /tmp/62.XXX 00:05:28.592 + tmp_file_1=/tmp/62.uRo 00:05:28.592 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:28.592 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:28.592 + tmp_file_2=/tmp/spdk_tgt_config.json.DlL 00:05:28.592 + ret=0 00:05:28.592 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:29.158 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:29.158 + diff -u /tmp/62.uRo /tmp/spdk_tgt_config.json.DlL 00:05:29.158 + echo 'INFO: JSON config files are the same' 00:05:29.158 INFO: JSON config files are the same 00:05:29.158 + rm /tmp/62.uRo /tmp/spdk_tgt_config.json.DlL 00:05:29.158 + exit 0 00:05:29.158 02:26:59 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:29.158 02:26:59 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:29.158 INFO: changing configuration and checking if this can be detected... 
00:05:29.158 02:26:59 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:29.158 02:26:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:29.158 02:26:59 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:29.158 02:26:59 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:29.158 02:26:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:29.158 + '[' 2 -ne 2 ']' 00:05:29.158 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:29.158 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:29.158 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:29.158 +++ basename /dev/fd/62 00:05:29.158 ++ mktemp /tmp/62.XXX 00:05:29.158 + tmp_file_1=/tmp/62.WBk 00:05:29.158 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:29.158 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:29.158 + tmp_file_2=/tmp/spdk_tgt_config.json.R1D 00:05:29.158 + ret=0 00:05:29.158 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:29.724 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:29.724 + diff -u /tmp/62.WBk /tmp/spdk_tgt_config.json.R1D 00:05:29.724 + ret=1 00:05:29.724 + echo '=== Start of file: /tmp/62.WBk ===' 00:05:29.724 + cat /tmp/62.WBk 00:05:29.724 + echo '=== End of file: /tmp/62.WBk ===' 00:05:29.724 + echo '' 00:05:29.724 + echo '=== Start of file: /tmp/spdk_tgt_config.json.R1D ===' 00:05:29.724 + cat /tmp/spdk_tgt_config.json.R1D 00:05:29.724 + echo '=== End of file: /tmp/spdk_tgt_config.json.R1D ===' 00:05:29.724 + echo '' 00:05:29.724 + rm /tmp/62.WBk /tmp/spdk_tgt_config.json.R1D 00:05:29.724 + exit 1 00:05:29.724 02:27:00 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:29.724 INFO: configuration change detected. 
00:05:29.724 02:27:00 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:29.724 02:27:00 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:29.724 02:27:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:29.724 02:27:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.724 02:27:00 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:29.724 02:27:00 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:29.725 02:27:00 json_config -- json_config/json_config.sh@324 -- # [[ -n 776958 ]] 00:05:29.725 02:27:00 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:29.725 02:27:00 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:29.725 02:27:00 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:29.725 02:27:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.725 02:27:00 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:29.725 02:27:00 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:29.725 02:27:00 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:29.725 02:27:00 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:29.725 02:27:00 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:29.725 02:27:00 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:29.725 02:27:00 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:29.725 02:27:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:29.725 02:27:00 json_config -- json_config/json_config.sh@330 -- # killprocess 776958 00:05:29.725 02:27:00 json_config -- common/autotest_common.sh@954 -- # '[' -z 776958 ']' 00:05:29.725 02:27:00 json_config -- common/autotest_common.sh@958 -- # kill -0 776958 
00:05:29.725 02:27:00 json_config -- common/autotest_common.sh@959 -- # uname 00:05:29.725 02:27:00 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.725 02:27:00 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 776958 00:05:29.725 02:27:00 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.725 02:27:00 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.725 02:27:00 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 776958' 00:05:29.725 killing process with pid 776958 00:05:29.725 02:27:00 json_config -- common/autotest_common.sh@973 -- # kill 776958 00:05:29.725 02:27:00 json_config -- common/autotest_common.sh@978 -- # wait 776958 00:05:31.636 02:27:01 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:31.636 02:27:01 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:31.636 02:27:01 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:31.636 02:27:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.636 02:27:01 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:31.636 02:27:01 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:31.636 INFO: Success 00:05:31.636 00:05:31.636 real 0m15.871s 00:05:31.636 user 0m16.926s 00:05:31.636 sys 0m2.145s 00:05:31.636 02:27:01 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.636 02:27:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:31.636 ************************************ 00:05:31.636 END TEST json_config 00:05:31.636 ************************************ 00:05:31.636 02:27:01 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:31.636 02:27:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.636 02:27:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.636 02:27:01 -- common/autotest_common.sh@10 -- # set +x 00:05:31.636 ************************************ 00:05:31.636 START TEST json_config_extra_key 00:05:31.636 ************************************ 00:05:31.636 02:27:01 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:31.636 02:27:01 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:31.636 02:27:01 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:31.636 02:27:01 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:31.636 02:27:02 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:31.636 02:27:02 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.636 02:27:02 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:31.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.636 --rc genhtml_branch_coverage=1 00:05:31.636 --rc genhtml_function_coverage=1 00:05:31.636 --rc genhtml_legend=1 00:05:31.636 --rc geninfo_all_blocks=1 
00:05:31.636 --rc geninfo_unexecuted_blocks=1 00:05:31.636 00:05:31.636 ' 00:05:31.636 02:27:02 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:31.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.636 --rc genhtml_branch_coverage=1 00:05:31.636 --rc genhtml_function_coverage=1 00:05:31.636 --rc genhtml_legend=1 00:05:31.636 --rc geninfo_all_blocks=1 00:05:31.636 --rc geninfo_unexecuted_blocks=1 00:05:31.636 00:05:31.636 ' 00:05:31.636 02:27:02 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:31.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.636 --rc genhtml_branch_coverage=1 00:05:31.636 --rc genhtml_function_coverage=1 00:05:31.636 --rc genhtml_legend=1 00:05:31.636 --rc geninfo_all_blocks=1 00:05:31.636 --rc geninfo_unexecuted_blocks=1 00:05:31.636 00:05:31.636 ' 00:05:31.636 02:27:02 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:31.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.636 --rc genhtml_branch_coverage=1 00:05:31.636 --rc genhtml_function_coverage=1 00:05:31.636 --rc genhtml_legend=1 00:05:31.636 --rc geninfo_all_blocks=1 00:05:31.636 --rc geninfo_unexecuted_blocks=1 00:05:31.636 00:05:31.636 ' 00:05:31.636 02:27:02 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:31.636 02:27:02 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:31.636 02:27:02 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:31.636 02:27:02 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:31.636 02:27:02 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:31.636 02:27:02 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:31.636 02:27:02 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:05:31.636 02:27:02 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:31.636 02:27:02 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:31.636 02:27:02 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:31.636 02:27:02 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:31.636 02:27:02 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:31.636 02:27:02 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:31.636 02:27:02 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:31.636 02:27:02 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:31.636 02:27:02 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:31.636 02:27:02 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:31.636 02:27:02 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:31.636 02:27:02 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:31.636 02:27:02 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:31.636 02:27:02 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.636 02:27:02 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.637 02:27:02 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.637 02:27:02 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:31.637 02:27:02 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:31.637 02:27:02 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:31.637 02:27:02 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:31.637 02:27:02 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:31.637 02:27:02 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:31.637 02:27:02 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:31.637 02:27:02 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:31.637 02:27:02 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:31.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:31.637 02:27:02 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:31.637 02:27:02 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:31.637 02:27:02 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:31.637 02:27:02 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:31.637 02:27:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:31.637 02:27:02 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:31.637 02:27:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:31.637 02:27:02 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:31.637 02:27:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:31.637 02:27:02 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:31.637 02:27:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:31.637 02:27:02 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:31.637 02:27:02 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:31.637 02:27:02 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:31.637 INFO: launching applications... 00:05:31.637 02:27:02 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:31.637 02:27:02 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:31.637 02:27:02 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:31.637 02:27:02 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:31.637 02:27:02 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:31.637 02:27:02 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:31.637 02:27:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.637 02:27:02 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:31.637 02:27:02 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=778393 00:05:31.637 02:27:02 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:31.637 Waiting for target to run... 
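The `waitforlisten` step that follows blocks until the freshly launched target is reachable on its UNIX domain socket. A minimal sketch of that wait-until-listening idea; `/tmp/demo.sock` and the inline Python listener are illustrative stand-ins (the real helper probes the socket with an RPC rather than just testing for the path):

```shell
# Launch a background listener on a UNIX domain socket.
sock=/tmp/demo.sock
rm -f "$sock"
python3 -c 'import socket,time; s=socket.socket(socket.AF_UNIX); s.bind("/tmp/demo.sock"); s.listen(1); time.sleep(2)' &

# Poll until the socket path appears, or give up after 100 tries.
for i in $(seq 1 100); do
  if [ -S "$sock" ]; then
    echo 'Process started and is listening on UNIX domain socket'
    break
  fi
  sleep 0.1
done
```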
00:05:31.637 02:27:02 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 778393 /var/tmp/spdk_tgt.sock 00:05:31.637 02:27:02 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 778393 ']' 00:05:31.637 02:27:02 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:31.637 02:27:02 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:31.637 02:27:02 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.637 02:27:02 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:31.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:31.637 02:27:02 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.637 02:27:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:31.637 [2024-12-16 02:27:02.164667] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:31.637 [2024-12-16 02:27:02.164713] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid778393 ] 00:05:31.896 [2024-12-16 02:27:02.446342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.896 [2024-12-16 02:27:02.459049] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.463 02:27:03 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.463 02:27:03 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:32.463 02:27:03 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:32.463 00:05:32.463 02:27:03 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:32.463 INFO: shutting down applications... 00:05:32.463 02:27:03 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:32.463 02:27:03 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:32.463 02:27:03 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:32.463 02:27:03 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 778393 ]] 00:05:32.463 02:27:03 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 778393 00:05:32.463 02:27:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:32.463 02:27:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:32.463 02:27:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 778393 00:05:32.463 02:27:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:33.032 02:27:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:33.032 02:27:03 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:33.032 02:27:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 778393 00:05:33.032 02:27:03 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:33.032 02:27:03 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:33.032 02:27:03 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:33.032 02:27:03 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:33.032 SPDK target shutdown done 00:05:33.032 02:27:03 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:33.032 Success 00:05:33.032 00:05:33.032 real 0m1.596s 00:05:33.032 user 0m1.400s 00:05:33.032 sys 0m0.390s 00:05:33.032 02:27:03 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:33.032 02:27:03 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:33.032 ************************************ 00:05:33.032 END TEST json_config_extra_key 00:05:33.032 ************************************ 00:05:33.032 02:27:03 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:33.032 02:27:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:33.032 02:27:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.032 02:27:03 -- common/autotest_common.sh@10 -- # set +x 00:05:33.032 ************************************ 00:05:33.032 START TEST alias_rpc 00:05:33.032 ************************************ 00:05:33.032 02:27:03 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:33.032 * Looking for test storage... 
00:05:33.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:33.291 02:27:03 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:33.291 02:27:03 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:33.291 02:27:03 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:33.291 02:27:03 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:33.291 02:27:03 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.291 02:27:03 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.291 02:27:03 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.291 02:27:03 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.291 02:27:03 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.291 02:27:03 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.291 02:27:03 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.291 02:27:03 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.291 02:27:03 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.291 02:27:03 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.291 02:27:03 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.291 02:27:03 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:33.291 02:27:03 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:33.291 02:27:03 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.291 02:27:03 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.291 02:27:03 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:33.291 02:27:03 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:33.291 02:27:03 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.291 02:27:03 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:33.291 02:27:03 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.291 02:27:03 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:33.291 02:27:03 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:33.291 02:27:03 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.291 02:27:03 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:33.292 02:27:03 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.292 02:27:03 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.292 02:27:03 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.292 02:27:03 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:33.292 02:27:03 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.292 02:27:03 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:33.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.292 --rc genhtml_branch_coverage=1 00:05:33.292 --rc genhtml_function_coverage=1 00:05:33.292 --rc genhtml_legend=1 00:05:33.292 --rc geninfo_all_blocks=1 00:05:33.292 --rc geninfo_unexecuted_blocks=1 00:05:33.292 00:05:33.292 ' 00:05:33.292 02:27:03 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:33.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.292 --rc genhtml_branch_coverage=1 00:05:33.292 --rc genhtml_function_coverage=1 00:05:33.292 --rc genhtml_legend=1 00:05:33.292 --rc geninfo_all_blocks=1 00:05:33.292 --rc geninfo_unexecuted_blocks=1 00:05:33.292 00:05:33.292 ' 00:05:33.292 02:27:03 alias_rpc -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:05:33.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.292 --rc genhtml_branch_coverage=1 00:05:33.292 --rc genhtml_function_coverage=1 00:05:33.292 --rc genhtml_legend=1 00:05:33.292 --rc geninfo_all_blocks=1 00:05:33.292 --rc geninfo_unexecuted_blocks=1 00:05:33.292 00:05:33.292 ' 00:05:33.292 02:27:03 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:33.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.292 --rc genhtml_branch_coverage=1 00:05:33.292 --rc genhtml_function_coverage=1 00:05:33.292 --rc genhtml_legend=1 00:05:33.292 --rc geninfo_all_blocks=1 00:05:33.292 --rc geninfo_unexecuted_blocks=1 00:05:33.292 00:05:33.292 ' 00:05:33.292 02:27:03 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:33.292 02:27:03 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=778822 00:05:33.292 02:27:03 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 778822 00:05:33.292 02:27:03 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.292 02:27:03 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 778822 ']' 00:05:33.292 02:27:03 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.292 02:27:03 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.292 02:27:03 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.292 02:27:03 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.292 02:27:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.292 [2024-12-16 02:27:03.829126] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:33.292 [2024-12-16 02:27:03.829177] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid778822 ] 00:05:33.292 [2024-12-16 02:27:03.886377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.292 [2024-12-16 02:27:03.908744] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.550 02:27:04 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.550 02:27:04 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:33.550 02:27:04 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:33.809 02:27:04 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 778822 00:05:33.809 02:27:04 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 778822 ']' 00:05:33.809 02:27:04 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 778822 00:05:33.809 02:27:04 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:33.809 02:27:04 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.809 02:27:04 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 778822 00:05:33.809 02:27:04 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.809 02:27:04 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.809 02:27:04 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 778822' 00:05:33.809 killing process with pid 778822 00:05:33.809 02:27:04 alias_rpc -- common/autotest_common.sh@973 -- # kill 778822 00:05:33.809 02:27:04 alias_rpc -- common/autotest_common.sh@978 -- # wait 778822 00:05:34.068 00:05:34.068 real 0m1.091s 00:05:34.068 user 0m1.156s 00:05:34.068 sys 0m0.401s 00:05:34.068 02:27:04 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.068 02:27:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.068 ************************************ 00:05:34.068 END TEST alias_rpc 00:05:34.068 ************************************ 00:05:34.068 02:27:04 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:34.068 02:27:04 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:34.068 02:27:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.068 02:27:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.068 02:27:04 -- common/autotest_common.sh@10 -- # set +x 00:05:34.327 ************************************ 00:05:34.327 START TEST spdkcli_tcp 00:05:34.327 ************************************ 00:05:34.327 02:27:04 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:34.327 * Looking for test storage... 
00:05:34.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:34.327 02:27:04 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:34.327 02:27:04 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:34.327 02:27:04 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:34.327 02:27:04 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:34.327 02:27:04 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.327 02:27:04 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.327 02:27:04 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.327 02:27:04 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.327 02:27:04 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.327 02:27:04 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.327 02:27:04 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.327 02:27:04 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.327 02:27:04 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.327 02:27:04 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.327 02:27:04 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.327 02:27:04 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:34.327 02:27:04 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:34.327 02:27:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.327 02:27:04 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.327 02:27:04 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:34.327 02:27:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:34.327 02:27:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.327 02:27:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:34.327 02:27:04 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.327 02:27:04 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:34.327 02:27:04 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:34.327 02:27:04 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.327 02:27:04 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:34.327 02:27:04 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.327 02:27:04 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.327 02:27:04 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.327 02:27:04 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:34.327 02:27:04 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.327 02:27:04 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:34.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.327 --rc genhtml_branch_coverage=1 00:05:34.327 --rc genhtml_function_coverage=1 00:05:34.327 --rc genhtml_legend=1 00:05:34.327 --rc geninfo_all_blocks=1 00:05:34.327 --rc geninfo_unexecuted_blocks=1 00:05:34.327 00:05:34.327 ' 00:05:34.327 02:27:04 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:34.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.327 --rc genhtml_branch_coverage=1 00:05:34.327 --rc genhtml_function_coverage=1 00:05:34.327 --rc genhtml_legend=1 00:05:34.327 --rc geninfo_all_blocks=1 00:05:34.327 --rc geninfo_unexecuted_blocks=1 00:05:34.327 00:05:34.328 ' 00:05:34.328 02:27:04 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:34.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.328 --rc genhtml_branch_coverage=1 00:05:34.328 --rc genhtml_function_coverage=1 00:05:34.328 --rc genhtml_legend=1 00:05:34.328 --rc geninfo_all_blocks=1 00:05:34.328 --rc geninfo_unexecuted_blocks=1 00:05:34.328 00:05:34.328 ' 00:05:34.328 02:27:04 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:34.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.328 --rc genhtml_branch_coverage=1 00:05:34.328 --rc genhtml_function_coverage=1 00:05:34.328 --rc genhtml_legend=1 00:05:34.328 --rc geninfo_all_blocks=1 00:05:34.328 --rc geninfo_unexecuted_blocks=1 00:05:34.328 00:05:34.328 ' 00:05:34.328 02:27:04 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:34.328 02:27:04 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:34.328 02:27:04 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:34.328 02:27:04 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:34.328 02:27:04 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:34.328 02:27:04 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:34.328 02:27:04 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:34.328 02:27:04 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:34.328 02:27:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:34.328 02:27:04 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=779013 00:05:34.328 02:27:04 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 779013 00:05:34.328 02:27:04 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:34.328 02:27:04 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 779013 ']' 00:05:34.328 02:27:04 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.328 02:27:04 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.328 02:27:04 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.328 02:27:04 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.328 02:27:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:34.586 [2024-12-16 02:27:04.998197] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:34.586 [2024-12-16 02:27:04.998243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779013 ] 00:05:34.586 [2024-12-16 02:27:05.073429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.586 [2024-12-16 02:27:05.098251] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.586 [2024-12-16 02:27:05.098255] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.842 02:27:05 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:34.843 02:27:05 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:34.843 02:27:05 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=779120 00:05:34.843 02:27:05 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:34.843 02:27:05 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:34.843 [ 00:05:34.843 "bdev_malloc_delete", 00:05:34.843 "bdev_malloc_create", 00:05:34.843 "bdev_null_resize", 00:05:34.843 "bdev_null_delete", 00:05:34.843 "bdev_null_create", 00:05:34.843 "bdev_nvme_cuse_unregister", 00:05:34.843 "bdev_nvme_cuse_register", 00:05:34.843 "bdev_opal_new_user", 00:05:34.843 "bdev_opal_set_lock_state", 00:05:34.843 "bdev_opal_delete", 00:05:34.843 "bdev_opal_get_info", 00:05:34.843 "bdev_opal_create", 00:05:34.843 "bdev_nvme_opal_revert", 00:05:34.843 "bdev_nvme_opal_init", 00:05:34.843 "bdev_nvme_send_cmd", 00:05:34.843 "bdev_nvme_set_keys", 00:05:34.843 "bdev_nvme_get_path_iostat", 00:05:34.843 "bdev_nvme_get_mdns_discovery_info", 00:05:34.843 "bdev_nvme_stop_mdns_discovery", 00:05:34.843 "bdev_nvme_start_mdns_discovery", 00:05:34.843 "bdev_nvme_set_multipath_policy", 00:05:34.843 "bdev_nvme_set_preferred_path", 00:05:34.843 "bdev_nvme_get_io_paths", 00:05:34.843 "bdev_nvme_remove_error_injection", 00:05:34.843 "bdev_nvme_add_error_injection", 00:05:34.843 "bdev_nvme_get_discovery_info", 00:05:34.843 "bdev_nvme_stop_discovery", 00:05:34.843 "bdev_nvme_start_discovery", 00:05:34.843 "bdev_nvme_get_controller_health_info", 00:05:34.843 "bdev_nvme_disable_controller", 00:05:34.843 "bdev_nvme_enable_controller", 00:05:34.843 "bdev_nvme_reset_controller", 00:05:34.843 "bdev_nvme_get_transport_statistics", 00:05:34.843 "bdev_nvme_apply_firmware", 00:05:34.843 "bdev_nvme_detach_controller", 00:05:34.843 "bdev_nvme_get_controllers", 00:05:34.843 "bdev_nvme_attach_controller", 00:05:34.843 "bdev_nvme_set_hotplug", 00:05:34.843 "bdev_nvme_set_options", 00:05:34.843 "bdev_passthru_delete", 00:05:34.843 "bdev_passthru_create", 00:05:34.843 "bdev_lvol_set_parent_bdev", 00:05:34.843 "bdev_lvol_set_parent", 00:05:34.843 "bdev_lvol_check_shallow_copy", 00:05:34.843 "bdev_lvol_start_shallow_copy", 00:05:34.843 "bdev_lvol_grow_lvstore", 00:05:34.843 
"bdev_lvol_get_lvols", 00:05:34.843 "bdev_lvol_get_lvstores", 00:05:34.843 "bdev_lvol_delete", 00:05:34.843 "bdev_lvol_set_read_only", 00:05:34.843 "bdev_lvol_resize", 00:05:34.843 "bdev_lvol_decouple_parent", 00:05:34.843 "bdev_lvol_inflate", 00:05:34.843 "bdev_lvol_rename", 00:05:34.843 "bdev_lvol_clone_bdev", 00:05:34.843 "bdev_lvol_clone", 00:05:34.843 "bdev_lvol_snapshot", 00:05:34.843 "bdev_lvol_create", 00:05:34.843 "bdev_lvol_delete_lvstore", 00:05:34.843 "bdev_lvol_rename_lvstore", 00:05:34.843 "bdev_lvol_create_lvstore", 00:05:34.843 "bdev_raid_set_options", 00:05:34.843 "bdev_raid_remove_base_bdev", 00:05:34.843 "bdev_raid_add_base_bdev", 00:05:34.843 "bdev_raid_delete", 00:05:34.843 "bdev_raid_create", 00:05:34.843 "bdev_raid_get_bdevs", 00:05:34.843 "bdev_error_inject_error", 00:05:34.843 "bdev_error_delete", 00:05:34.843 "bdev_error_create", 00:05:34.843 "bdev_split_delete", 00:05:34.843 "bdev_split_create", 00:05:34.843 "bdev_delay_delete", 00:05:34.843 "bdev_delay_create", 00:05:34.843 "bdev_delay_update_latency", 00:05:34.843 "bdev_zone_block_delete", 00:05:34.843 "bdev_zone_block_create", 00:05:34.843 "blobfs_create", 00:05:34.843 "blobfs_detect", 00:05:34.843 "blobfs_set_cache_size", 00:05:34.843 "bdev_aio_delete", 00:05:34.843 "bdev_aio_rescan", 00:05:34.843 "bdev_aio_create", 00:05:34.843 "bdev_ftl_set_property", 00:05:34.843 "bdev_ftl_get_properties", 00:05:34.843 "bdev_ftl_get_stats", 00:05:34.843 "bdev_ftl_unmap", 00:05:34.843 "bdev_ftl_unload", 00:05:34.843 "bdev_ftl_delete", 00:05:34.843 "bdev_ftl_load", 00:05:34.843 "bdev_ftl_create", 00:05:34.843 "bdev_virtio_attach_controller", 00:05:34.843 "bdev_virtio_scsi_get_devices", 00:05:34.843 "bdev_virtio_detach_controller", 00:05:34.843 "bdev_virtio_blk_set_hotplug", 00:05:34.843 "bdev_iscsi_delete", 00:05:34.843 "bdev_iscsi_create", 00:05:34.843 "bdev_iscsi_set_options", 00:05:34.843 "accel_error_inject_error", 00:05:34.843 "ioat_scan_accel_module", 00:05:34.843 "dsa_scan_accel_module", 
00:05:34.843 "iaa_scan_accel_module", 00:05:34.843 "vfu_virtio_create_fs_endpoint", 00:05:34.843 "vfu_virtio_create_scsi_endpoint", 00:05:34.843 "vfu_virtio_scsi_remove_target", 00:05:34.843 "vfu_virtio_scsi_add_target", 00:05:34.843 "vfu_virtio_create_blk_endpoint", 00:05:34.843 "vfu_virtio_delete_endpoint", 00:05:34.843 "keyring_file_remove_key", 00:05:34.843 "keyring_file_add_key", 00:05:34.843 "keyring_linux_set_options", 00:05:34.843 "fsdev_aio_delete", 00:05:34.843 "fsdev_aio_create", 00:05:34.843 "iscsi_get_histogram", 00:05:34.843 "iscsi_enable_histogram", 00:05:34.843 "iscsi_set_options", 00:05:34.843 "iscsi_get_auth_groups", 00:05:34.843 "iscsi_auth_group_remove_secret", 00:05:34.843 "iscsi_auth_group_add_secret", 00:05:34.843 "iscsi_delete_auth_group", 00:05:34.843 "iscsi_create_auth_group", 00:05:34.843 "iscsi_set_discovery_auth", 00:05:34.843 "iscsi_get_options", 00:05:34.843 "iscsi_target_node_request_logout", 00:05:34.843 "iscsi_target_node_set_redirect", 00:05:34.843 "iscsi_target_node_set_auth", 00:05:34.843 "iscsi_target_node_add_lun", 00:05:34.843 "iscsi_get_stats", 00:05:34.843 "iscsi_get_connections", 00:05:34.843 "iscsi_portal_group_set_auth", 00:05:34.843 "iscsi_start_portal_group", 00:05:34.843 "iscsi_delete_portal_group", 00:05:34.843 "iscsi_create_portal_group", 00:05:34.843 "iscsi_get_portal_groups", 00:05:34.843 "iscsi_delete_target_node", 00:05:34.843 "iscsi_target_node_remove_pg_ig_maps", 00:05:34.843 "iscsi_target_node_add_pg_ig_maps", 00:05:34.843 "iscsi_create_target_node", 00:05:34.843 "iscsi_get_target_nodes", 00:05:34.843 "iscsi_delete_initiator_group", 00:05:34.843 "iscsi_initiator_group_remove_initiators", 00:05:34.843 "iscsi_initiator_group_add_initiators", 00:05:34.843 "iscsi_create_initiator_group", 00:05:34.843 "iscsi_get_initiator_groups", 00:05:34.843 "nvmf_set_crdt", 00:05:34.843 "nvmf_set_config", 00:05:34.843 "nvmf_set_max_subsystems", 00:05:34.843 "nvmf_stop_mdns_prr", 00:05:34.843 "nvmf_publish_mdns_prr", 
00:05:34.843 "nvmf_subsystem_get_listeners", 00:05:34.843 "nvmf_subsystem_get_qpairs", 00:05:34.843 "nvmf_subsystem_get_controllers", 00:05:34.843 "nvmf_get_stats", 00:05:34.843 "nvmf_get_transports", 00:05:34.843 "nvmf_create_transport", 00:05:34.843 "nvmf_get_targets", 00:05:34.843 "nvmf_delete_target", 00:05:34.843 "nvmf_create_target", 00:05:34.843 "nvmf_subsystem_allow_any_host", 00:05:34.843 "nvmf_subsystem_set_keys", 00:05:34.843 "nvmf_subsystem_remove_host", 00:05:34.843 "nvmf_subsystem_add_host", 00:05:34.843 "nvmf_ns_remove_host", 00:05:34.843 "nvmf_ns_add_host", 00:05:34.843 "nvmf_subsystem_remove_ns", 00:05:34.843 "nvmf_subsystem_set_ns_ana_group", 00:05:34.843 "nvmf_subsystem_add_ns", 00:05:34.843 "nvmf_subsystem_listener_set_ana_state", 00:05:34.843 "nvmf_discovery_get_referrals", 00:05:34.843 "nvmf_discovery_remove_referral", 00:05:34.843 "nvmf_discovery_add_referral", 00:05:34.843 "nvmf_subsystem_remove_listener", 00:05:34.843 "nvmf_subsystem_add_listener", 00:05:34.843 "nvmf_delete_subsystem", 00:05:34.843 "nvmf_create_subsystem", 00:05:34.843 "nvmf_get_subsystems", 00:05:34.843 "env_dpdk_get_mem_stats", 00:05:34.843 "nbd_get_disks", 00:05:34.843 "nbd_stop_disk", 00:05:34.843 "nbd_start_disk", 00:05:34.843 "ublk_recover_disk", 00:05:34.843 "ublk_get_disks", 00:05:34.843 "ublk_stop_disk", 00:05:34.843 "ublk_start_disk", 00:05:34.843 "ublk_destroy_target", 00:05:34.843 "ublk_create_target", 00:05:34.843 "virtio_blk_create_transport", 00:05:34.843 "virtio_blk_get_transports", 00:05:34.843 "vhost_controller_set_coalescing", 00:05:34.843 "vhost_get_controllers", 00:05:34.843 "vhost_delete_controller", 00:05:34.843 "vhost_create_blk_controller", 00:05:34.844 "vhost_scsi_controller_remove_target", 00:05:34.844 "vhost_scsi_controller_add_target", 00:05:34.844 "vhost_start_scsi_controller", 00:05:34.844 "vhost_create_scsi_controller", 00:05:34.844 "thread_set_cpumask", 00:05:34.844 "scheduler_set_options", 00:05:34.844 "framework_get_governor", 00:05:34.844 
"framework_get_scheduler", 00:05:34.844 "framework_set_scheduler", 00:05:34.844 "framework_get_reactors", 00:05:34.844 "thread_get_io_channels", 00:05:34.844 "thread_get_pollers", 00:05:34.844 "thread_get_stats", 00:05:34.844 "framework_monitor_context_switch", 00:05:34.844 "spdk_kill_instance", 00:05:34.844 "log_enable_timestamps", 00:05:34.844 "log_get_flags", 00:05:34.844 "log_clear_flag", 00:05:34.844 "log_set_flag", 00:05:34.844 "log_get_level", 00:05:34.844 "log_set_level", 00:05:34.844 "log_get_print_level", 00:05:34.844 "log_set_print_level", 00:05:34.844 "framework_enable_cpumask_locks", 00:05:34.844 "framework_disable_cpumask_locks", 00:05:34.844 "framework_wait_init", 00:05:34.844 "framework_start_init", 00:05:34.844 "scsi_get_devices", 00:05:34.844 "bdev_get_histogram", 00:05:34.844 "bdev_enable_histogram", 00:05:34.844 "bdev_set_qos_limit", 00:05:34.844 "bdev_set_qd_sampling_period", 00:05:34.844 "bdev_get_bdevs", 00:05:34.844 "bdev_reset_iostat", 00:05:34.844 "bdev_get_iostat", 00:05:34.844 "bdev_examine", 00:05:34.844 "bdev_wait_for_examine", 00:05:34.844 "bdev_set_options", 00:05:34.844 "accel_get_stats", 00:05:34.844 "accel_set_options", 00:05:34.844 "accel_set_driver", 00:05:34.844 "accel_crypto_key_destroy", 00:05:34.844 "accel_crypto_keys_get", 00:05:34.844 "accel_crypto_key_create", 00:05:34.844 "accel_assign_opc", 00:05:34.844 "accel_get_module_info", 00:05:34.844 "accel_get_opc_assignments", 00:05:34.844 "vmd_rescan", 00:05:34.844 "vmd_remove_device", 00:05:34.844 "vmd_enable", 00:05:34.844 "sock_get_default_impl", 00:05:34.844 "sock_set_default_impl", 00:05:34.844 "sock_impl_set_options", 00:05:34.844 "sock_impl_get_options", 00:05:34.844 "iobuf_get_stats", 00:05:34.844 "iobuf_set_options", 00:05:34.844 "keyring_get_keys", 00:05:34.844 "vfu_tgt_set_base_path", 00:05:34.844 "framework_get_pci_devices", 00:05:34.844 "framework_get_config", 00:05:34.844 "framework_get_subsystems", 00:05:34.844 "fsdev_set_opts", 00:05:34.844 "fsdev_get_opts", 
00:05:34.844 "trace_get_info", 00:05:34.844 "trace_get_tpoint_group_mask", 00:05:34.844 "trace_disable_tpoint_group", 00:05:34.844 "trace_enable_tpoint_group", 00:05:34.844 "trace_clear_tpoint_mask", 00:05:34.844 "trace_set_tpoint_mask", 00:05:34.844 "notify_get_notifications", 00:05:34.844 "notify_get_types", 00:05:34.844 "spdk_get_version", 00:05:34.844 "rpc_get_methods" 00:05:34.844 ] 00:05:34.844 02:27:05 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:34.844 02:27:05 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:34.844 02:27:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:35.101 02:27:05 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:35.101 02:27:05 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 779013 00:05:35.101 02:27:05 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 779013 ']' 00:05:35.101 02:27:05 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 779013 00:05:35.101 02:27:05 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:35.101 02:27:05 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.101 02:27:05 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 779013 00:05:35.101 02:27:05 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.101 02:27:05 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.101 02:27:05 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 779013' 00:05:35.101 killing process with pid 779013 00:05:35.101 02:27:05 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 779013 00:05:35.101 02:27:05 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 779013 00:05:35.360 00:05:35.360 real 0m1.116s 00:05:35.360 user 0m1.877s 00:05:35.360 sys 0m0.449s 00:05:35.360 02:27:05 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.360 02:27:05 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:05:35.360 ************************************ 00:05:35.360 END TEST spdkcli_tcp 00:05:35.360 ************************************ 00:05:35.360 02:27:05 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:35.360 02:27:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.360 02:27:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.360 02:27:05 -- common/autotest_common.sh@10 -- # set +x 00:05:35.360 ************************************ 00:05:35.360 START TEST dpdk_mem_utility 00:05:35.360 ************************************ 00:05:35.360 02:27:05 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:35.619 * Looking for test storage... 00:05:35.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:35.619 02:27:06 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:35.619 02:27:06 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:35.619 02:27:06 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:35.619 02:27:06 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:35.619 02:27:06 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.619 02:27:06 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.619 02:27:06 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.619 02:27:06 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.619 02:27:06 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.619 02:27:06 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.619 02:27:06 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:05:35.619 02:27:06 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.619 02:27:06 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.619 02:27:06 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.619 02:27:06 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.619 02:27:06 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:35.619 02:27:06 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:35.619 02:27:06 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.619 02:27:06 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.619 02:27:06 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:35.619 02:27:06 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:35.619 02:27:06 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.619 02:27:06 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:35.619 02:27:06 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.619 02:27:06 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:35.619 02:27:06 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:35.619 02:27:06 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.619 02:27:06 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:35.620 02:27:06 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.620 02:27:06 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.620 02:27:06 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.620 02:27:06 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:35.620 02:27:06 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.620 02:27:06 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 
00:05:35.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.620 --rc genhtml_branch_coverage=1 00:05:35.620 --rc genhtml_function_coverage=1 00:05:35.620 --rc genhtml_legend=1 00:05:35.620 --rc geninfo_all_blocks=1 00:05:35.620 --rc geninfo_unexecuted_blocks=1 00:05:35.620 00:05:35.620 ' 00:05:35.620 02:27:06 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:35.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.620 --rc genhtml_branch_coverage=1 00:05:35.620 --rc genhtml_function_coverage=1 00:05:35.620 --rc genhtml_legend=1 00:05:35.620 --rc geninfo_all_blocks=1 00:05:35.620 --rc geninfo_unexecuted_blocks=1 00:05:35.620 00:05:35.620 ' 00:05:35.620 02:27:06 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:35.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.620 --rc genhtml_branch_coverage=1 00:05:35.620 --rc genhtml_function_coverage=1 00:05:35.620 --rc genhtml_legend=1 00:05:35.620 --rc geninfo_all_blocks=1 00:05:35.620 --rc geninfo_unexecuted_blocks=1 00:05:35.620 00:05:35.620 ' 00:05:35.620 02:27:06 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:35.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.620 --rc genhtml_branch_coverage=1 00:05:35.620 --rc genhtml_function_coverage=1 00:05:35.620 --rc genhtml_legend=1 00:05:35.620 --rc geninfo_all_blocks=1 00:05:35.620 --rc geninfo_unexecuted_blocks=1 00:05:35.620 00:05:35.620 ' 00:05:35.620 02:27:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:35.620 02:27:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=779209 00:05:35.620 02:27:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 779209 00:05:35.620 02:27:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:35.620 02:27:06 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 779209 ']' 00:05:35.620 02:27:06 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.620 02:27:06 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.620 02:27:06 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.620 02:27:06 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.620 02:27:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:35.620 [2024-12-16 02:27:06.174914] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:35.620 [2024-12-16 02:27:06.174967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779209 ] 00:05:35.620 [2024-12-16 02:27:06.252619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.620 [2024-12-16 02:27:06.276255] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.879 02:27:06 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.879 02:27:06 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:35.879 02:27:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:35.879 02:27:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:35.879 02:27:06 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:35.879 
02:27:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:35.879 { 00:05:35.879 "filename": "/tmp/spdk_mem_dump.txt" 00:05:35.879 } 00:05:35.879 02:27:06 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:35.879 02:27:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:36.138 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:36.138 1 heaps totaling size 818.000000 MiB 00:05:36.138 size: 818.000000 MiB heap id: 0 00:05:36.138 end heaps---------- 00:05:36.138 9 mempools totaling size 603.782043 MiB 00:05:36.138 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:36.138 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:36.138 size: 100.555481 MiB name: bdev_io_779209 00:05:36.138 size: 50.003479 MiB name: msgpool_779209 00:05:36.138 size: 36.509338 MiB name: fsdev_io_779209 00:05:36.138 size: 21.763794 MiB name: PDU_Pool 00:05:36.138 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:36.138 size: 4.133484 MiB name: evtpool_779209 00:05:36.138 size: 0.026123 MiB name: Session_Pool 00:05:36.138 end mempools------- 00:05:36.138 6 memzones totaling size 4.142822 MiB 00:05:36.138 size: 1.000366 MiB name: RG_ring_0_779209 00:05:36.138 size: 1.000366 MiB name: RG_ring_1_779209 00:05:36.138 size: 1.000366 MiB name: RG_ring_4_779209 00:05:36.138 size: 1.000366 MiB name: RG_ring_5_779209 00:05:36.138 size: 0.125366 MiB name: RG_ring_2_779209 00:05:36.138 size: 0.015991 MiB name: RG_ring_3_779209 00:05:36.138 end memzones------- 00:05:36.138 02:27:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:36.138 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:36.138 list of free elements. 
size: 10.852478 MiB
00:05:36.138 element at address: 0x200019200000 with size: 0.999878 MiB
00:05:36.138 element at address: 0x200019400000 with size: 0.999878 MiB
00:05:36.138 element at address: 0x200000400000 with size: 0.998535 MiB
00:05:36.138 element at address: 0x200032000000 with size: 0.994446 MiB
00:05:36.138 element at address: 0x200006400000 with size: 0.959839 MiB
00:05:36.138 element at address: 0x200012c00000 with size: 0.944275 MiB
00:05:36.138 element at address: 0x200019600000 with size: 0.936584 MiB
00:05:36.138 element at address: 0x200000200000 with size: 0.717346 MiB
00:05:36.138 element at address: 0x20001ae00000 with size: 0.582886 MiB
00:05:36.138 element at address: 0x200000c00000 with size: 0.495422 MiB
00:05:36.138 element at address: 0x20000a600000 with size: 0.490723 MiB
00:05:36.138 element at address: 0x200019800000 with size: 0.485657 MiB
00:05:36.138 element at address: 0x200003e00000 with size: 0.481934 MiB
00:05:36.138 element at address: 0x200028200000 with size: 0.410034 MiB
00:05:36.138 element at address: 0x200000800000 with size: 0.355042 MiB
00:05:36.138 list of standard malloc elements. size: 199.218628 MiB
00:05:36.138 element at address: 0x20000a7fff80 with size: 132.000122 MiB
00:05:36.138 element at address: 0x2000065fff80 with size: 64.000122 MiB
00:05:36.139 element at address: 0x2000192fff80 with size: 1.000122 MiB
00:05:36.139 element at address: 0x2000194fff80 with size: 1.000122 MiB
00:05:36.139 element at address: 0x2000196fff80 with size: 1.000122 MiB
00:05:36.139 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:05:36.139 element at address: 0x2000196eff00 with size: 0.062622 MiB
00:05:36.139 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:05:36.139 element at address: 0x2000196efdc0 with size: 0.000305 MiB
00:05:36.139 element at address: 0x2000002d7c40 with size: 0.000183 MiB
00:05:36.139 element at address: 0x2000003d9e40 with size: 0.000183 MiB
00:05:36.139 element at address: 0x2000004ffa00 with size: 0.000183 MiB
00:05:36.139 element at address: 0x2000004ffac0 with size: 0.000183 MiB
00:05:36.139 element at address: 0x2000004ffb80 with size: 0.000183 MiB
00:05:36.139 element at address: 0x2000004ffd80 with size: 0.000183 MiB
00:05:36.139 element at address: 0x2000004ffe40 with size: 0.000183 MiB
00:05:36.139 element at address: 0x20000085ae40 with size: 0.000183 MiB
00:05:36.139 element at address: 0x20000085b040 with size: 0.000183 MiB
00:05:36.139 element at address: 0x20000085f300 with size: 0.000183 MiB
00:05:36.139 element at address: 0x20000087f5c0 with size: 0.000183 MiB
00:05:36.139 element at address: 0x20000087f680 with size: 0.000183 MiB
00:05:36.139 element at address: 0x2000008ff940 with size: 0.000183 MiB
00:05:36.139 element at address: 0x2000008ffb40 with size: 0.000183 MiB
00:05:36.139 element at address: 0x200000c7ed40 with size: 0.000183 MiB
00:05:36.139 element at address: 0x200000cff000 with size: 0.000183 MiB
00:05:36.139 element at address: 0x200000cff0c0 with size: 0.000183 MiB
00:05:36.139 element at address: 0x200003e7b600 with size: 0.000183 MiB
00:05:36.139 element at address: 0x200003e7b6c0 with size: 0.000183 MiB
00:05:36.139 element at address: 0x200003efb980 with size: 0.000183 MiB
00:05:36.139 element at address: 0x2000064fdd80 with size: 0.000183 MiB
00:05:36.139 element at address: 0x20000a67da00 with size: 0.000183 MiB
00:05:36.139 element at address: 0x20000a67dac0 with size: 0.000183 MiB
00:05:36.139 element at address: 0x20000a6fdd80 with size: 0.000183 MiB
00:05:36.139 element at address: 0x200012cf1bc0 with size: 0.000183 MiB
00:05:36.139 element at address: 0x2000196efc40 with size: 0.000183 MiB
00:05:36.139 element at address: 0x2000196efd00 with size: 0.000183 MiB
00:05:36.139 element at address: 0x2000198bc740 with size: 0.000183 MiB
00:05:36.139 element at address: 0x20001ae95380 with size: 0.000183 MiB
00:05:36.139 element at address: 0x20001ae95440 with size: 0.000183 MiB
00:05:36.139 element at address: 0x200028268f80 with size: 0.000183 MiB
00:05:36.139 element at address: 0x200028269040 with size: 0.000183 MiB
00:05:36.139 element at address: 0x20002826fc40 with size: 0.000183 MiB
00:05:36.139 element at address: 0x20002826fe40 with size: 0.000183 MiB
00:05:36.139 element at address: 0x20002826ff00 with size: 0.000183 MiB
00:05:36.139 list of memzone associated elements. size: 607.928894 MiB
00:05:36.139 element at address: 0x20001ae95500 with size: 211.416748 MiB
00:05:36.139 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:36.139 element at address: 0x20002826ffc0 with size: 157.562561 MiB
00:05:36.139 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:36.139 element at address: 0x200012df1e80 with size: 100.055054 MiB
00:05:36.139 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_779209_0
00:05:36.139 element at address: 0x200000dff380 with size: 48.003052 MiB
00:05:36.139 associated memzone info: size: 48.002930 MiB name: MP_msgpool_779209_0
00:05:36.139 element at address: 0x200003ffdb80 with size: 36.008911 MiB
00:05:36.139 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_779209_0
00:05:36.139 element at address: 0x2000199be940 with size: 20.255554 MiB
00:05:36.139 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:36.139 element at address: 0x2000321feb40 with size: 18.005066 MiB
00:05:36.139 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:36.139 element at address: 0x2000004fff00 with size: 3.000244 MiB
00:05:36.139 associated memzone info: size: 3.000122 MiB name: MP_evtpool_779209_0
00:05:36.139 element at address: 0x2000009ffe00 with size: 2.000488 MiB
00:05:36.139 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_779209
00:05:36.139 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:05:36.139 associated memzone info: size: 1.007996 MiB name: MP_evtpool_779209
00:05:36.139 element at address: 0x20000a6fde40 with size: 1.008118 MiB
00:05:36.139 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:36.139 element at address: 0x2000198bc800 with size: 1.008118 MiB
00:05:36.139 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:36.139 element at address: 0x2000064fde40 with size: 1.008118 MiB
00:05:36.139 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:36.139 element at address: 0x200003efba40 with size: 1.008118 MiB
00:05:36.139 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:36.139 element at address: 0x200000cff180 with size: 1.000488 MiB
00:05:36.139 associated memzone info: size: 1.000366 MiB name: RG_ring_0_779209
00:05:36.139 element at address: 0x2000008ffc00 with size: 1.000488 MiB
00:05:36.139 associated memzone info: size: 1.000366 MiB name: RG_ring_1_779209
00:05:36.139 element at address: 0x200012cf1c80 with size: 1.000488 MiB
00:05:36.139 associated memzone info: size: 1.000366 MiB name: RG_ring_4_779209
00:05:36.139 element at address: 0x2000320fe940 with size: 1.000488 MiB
00:05:36.139 associated memzone info: size: 1.000366 MiB name: RG_ring_5_779209
00:05:36.139 element at address: 0x20000087f740 with size: 0.500488 MiB
00:05:36.139 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_779209
00:05:36.139 element at address: 0x200000c7ee00 with size: 0.500488 MiB
00:05:36.139 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_779209
00:05:36.139 element at address: 0x20000a67db80 with size: 0.500488 MiB
00:05:36.139 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:36.139 element at address: 0x200003e7b780 with size: 0.500488 MiB
00:05:36.139 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:36.139 element at address: 0x20001987c540 with size: 0.250488 MiB
00:05:36.139 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:36.139 element at address: 0x2000002b7a40 with size: 0.125488 MiB
00:05:36.139 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_779209
00:05:36.139 element at address: 0x20000085f3c0 with size: 0.125488 MiB
00:05:36.139 associated memzone info: size: 0.125366 MiB name: RG_ring_2_779209
00:05:36.139 element at address: 0x2000064f5b80 with size: 0.031738 MiB
00:05:36.139 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:36.139 element at address: 0x200028269100 with size: 0.023743 MiB
00:05:36.139 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:36.139 element at address: 0x20000085b100 with size: 0.016113 MiB
00:05:36.139 associated memzone info: size: 0.015991 MiB name: RG_ring_3_779209
00:05:36.139 element at address: 0x20002826f240 with size: 0.002441 MiB
00:05:36.139 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:36.139 element at address: 0x2000004ffc40 with size: 0.000305 MiB
00:05:36.139 associated memzone info: size: 0.000183 MiB name: MP_msgpool_779209
00:05:36.139 element at address: 0x2000008ffa00 with size: 0.000305 MiB
00:05:36.139 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_779209
00:05:36.139 element at address: 0x20000085af00 with size: 0.000305 MiB
00:05:36.139 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_779209
00:05:36.139 element at address: 0x20002826fd00 with size: 0.000305 MiB
00:05:36.139 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:36.139 02:27:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:36.139 02:27:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 779209
00:05:36.139 02:27:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 779209 ']'
00:05:36.139 02:27:06 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 779209
00:05:36.139 02:27:06 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:05:36.139 02:27:06 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:36.139 02:27:06 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 779209
00:05:36.139 02:27:06 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:36.139 02:27:06 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:36.139 02:27:06 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 779209'
00:05:36.139 killing process with pid 779209
00:05:36.139 02:27:06 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 779209
00:05:36.139 02:27:06 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 779209
00:05:36.398
00:05:36.398 real 0m0.997s
00:05:36.398 user 0m0.960s
00:05:36.398 sys 0m0.402s
00:05:36.398 02:27:06 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:36.398 02:27:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:36.398 ************************************
00:05:36.398 END TEST dpdk_mem_utility
00:05:36.398 ************************************
00:05:36.398 02:27:06 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:05:36.398 02:27:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:36.398 02:27:06 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:36.398 02:27:06 -- common/autotest_common.sh@10 -- # set +x
00:05:36.398 ************************************
00:05:36.398 START TEST event
00:05:36.398 ************************************
00:05:36.398 02:27:07 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:05:36.656 * Looking for test storage...
00:05:36.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:05:36.656 02:27:07 event -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:36.656 02:27:07 event -- common/autotest_common.sh@1711 -- # lcov --version
00:05:36.656 02:27:07 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:36.656 02:27:07 event -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:36.656 02:27:07 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:36.656 02:27:07 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:36.656 02:27:07 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:36.656 02:27:07 event -- scripts/common.sh@336 -- # IFS=.-:
00:05:36.656 02:27:07 event -- scripts/common.sh@336 -- # read -ra ver1
00:05:36.656 02:27:07 event -- scripts/common.sh@337 -- # IFS=.-:
00:05:36.656 02:27:07 event -- scripts/common.sh@337 -- # read -ra ver2
00:05:36.656 02:27:07 event -- scripts/common.sh@338 -- # local 'op=<'
00:05:36.656 02:27:07 event -- scripts/common.sh@340 -- # ver1_l=2
00:05:36.656 02:27:07 event -- scripts/common.sh@341 -- # ver2_l=1
00:05:36.656 02:27:07 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:36.656 02:27:07 event -- scripts/common.sh@344 -- # case "$op" in
00:05:36.656 02:27:07 event -- scripts/common.sh@345 -- # : 1
00:05:36.656 02:27:07 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:36.656 02:27:07 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:36.656 02:27:07 event -- scripts/common.sh@365 -- # decimal 1
00:05:36.656 02:27:07 event -- scripts/common.sh@353 -- # local d=1
00:05:36.656 02:27:07 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:36.656 02:27:07 event -- scripts/common.sh@355 -- # echo 1
00:05:36.656 02:27:07 event -- scripts/common.sh@365 -- # ver1[v]=1
00:05:36.656 02:27:07 event -- scripts/common.sh@366 -- # decimal 2
00:05:36.656 02:27:07 event -- scripts/common.sh@353 -- # local d=2
00:05:36.656 02:27:07 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:36.656 02:27:07 event -- scripts/common.sh@355 -- # echo 2
00:05:36.656 02:27:07 event -- scripts/common.sh@366 -- # ver2[v]=2
00:05:36.656 02:27:07 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:36.656 02:27:07 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:36.656 02:27:07 event -- scripts/common.sh@368 -- # return 0
00:05:36.656 02:27:07 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:36.656 02:27:07 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:36.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.656 --rc genhtml_branch_coverage=1
00:05:36.656 --rc genhtml_function_coverage=1
00:05:36.656 --rc genhtml_legend=1
00:05:36.656 --rc geninfo_all_blocks=1
00:05:36.656 --rc geninfo_unexecuted_blocks=1
00:05:36.656
00:05:36.656 '
00:05:36.656 02:27:07 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:36.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.656 --rc genhtml_branch_coverage=1
00:05:36.656 --rc genhtml_function_coverage=1
00:05:36.656 --rc genhtml_legend=1
00:05:36.656 --rc geninfo_all_blocks=1
00:05:36.656 --rc geninfo_unexecuted_blocks=1
00:05:36.656
00:05:36.656 '
00:05:36.656 02:27:07 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:36.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.656 --rc genhtml_branch_coverage=1
00:05:36.656 --rc genhtml_function_coverage=1
00:05:36.656 --rc genhtml_legend=1
00:05:36.656 --rc geninfo_all_blocks=1
00:05:36.656 --rc geninfo_unexecuted_blocks=1
00:05:36.656
00:05:36.656 '
00:05:36.656 02:27:07 event -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:36.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:36.657 --rc genhtml_branch_coverage=1
00:05:36.657 --rc genhtml_function_coverage=1
00:05:36.657 --rc genhtml_legend=1
00:05:36.657 --rc geninfo_all_blocks=1
00:05:36.657 --rc geninfo_unexecuted_blocks=1
00:05:36.657
00:05:36.657 '
00:05:36.657 02:27:07 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:05:36.657 02:27:07 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:36.657 02:27:07 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:36.657 02:27:07 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:05:36.657 02:27:07 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:36.657 02:27:07 event -- common/autotest_common.sh@10 -- # set +x
00:05:36.657 ************************************
00:05:36.657 START TEST event_perf
00:05:36.657 ************************************
00:05:36.657 02:27:07 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:36.657 Running I/O for 1 seconds...[2024-12-16 02:27:07.243917] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:05:36.657 [2024-12-16 02:27:07.243984] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779765 ]
00:05:36.914 [2024-12-16 02:27:07.324533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:36.914 [2024-12-16 02:27:07.350770] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:05:36.914 [2024-12-16 02:27:07.350913] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:05:36.914 [2024-12-16 02:27:07.350950] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:36.914 [2024-12-16 02:27:07.350951] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:05:37.847 Running I/O for 1 seconds...
00:05:37.847 lcore 0: 211210
00:05:37.847 lcore 1: 211210
00:05:37.847 lcore 2: 211211
00:05:37.847 lcore 3: 211210
00:05:37.847 done.
00:05:37.847
00:05:37.847 real 0m1.165s
00:05:37.847 user 0m4.071s
00:05:37.847 sys 0m0.089s
00:05:37.847 02:27:08 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:37.847 02:27:08 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:05:37.847 ************************************
00:05:37.847 END TEST event_perf
00:05:37.847 ************************************
00:05:37.847 02:27:08 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:37.847 02:27:08 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:05:37.847 02:27:08 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:37.847 02:27:08 event -- common/autotest_common.sh@10 -- # set +x
00:05:37.847 ************************************
00:05:37.847 START TEST event_reactor
00:05:37.847 ************************************
00:05:37.847 02:27:08 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:37.847 [2024-12-16 02:27:08.475241] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:05:37.847 [2024-12-16 02:27:08.475313] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780125 ]
00:05:38.105 [2024-12-16 02:27:08.552023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:38.105 [2024-12-16 02:27:08.573389] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:39.041 test_start
00:05:39.041 oneshot
00:05:39.041 tick 100
00:05:39.041 tick 100
00:05:39.041 tick 250
00:05:39.041 tick 100
00:05:39.041 tick 100
00:05:39.041 tick 100
00:05:39.041 tick 250
00:05:39.041 tick 500
00:05:39.041 tick 100
00:05:39.041 tick 100
00:05:39.041 tick 250
00:05:39.041 tick 100
00:05:39.041 tick 100
00:05:39.041 test_end
00:05:39.041
00:05:39.041 real 0m1.152s
00:05:39.041 user 0m1.079s
00:05:39.041 sys 0m0.070s
00:05:39.041 02:27:09 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:39.041 02:27:09 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:05:39.041 ************************************
00:05:39.041 END TEST event_reactor
00:05:39.041 ************************************
00:05:39.041 02:27:09 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:39.041 02:27:09 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:05:39.041 02:27:09 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:39.041 02:27:09 event -- common/autotest_common.sh@10 -- # set +x
00:05:39.041 ************************************
00:05:39.041 START TEST event_reactor_perf
00:05:39.041 ************************************
00:05:39.041 02:27:09 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:39.041 [2024-12-16 02:27:09.697413] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:05:39.041 [2024-12-16 02:27:09.697482] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780370 ]
00:05:39.300 [2024-12-16 02:27:09.774563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:39.300 [2024-12-16 02:27:09.795093] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:40.235 test_start
00:05:40.235 test_end
00:05:40.235 Performance: 498472 events per second
00:05:40.235
00:05:40.235 real 0m1.155s
00:05:40.235 user 0m1.070s
00:05:40.235 sys 0m0.080s
00:05:40.235 02:27:10 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:40.235 02:27:10 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:05:40.235 ************************************
00:05:40.235 END TEST event_reactor_perf
00:05:40.235 ************************************
00:05:40.235 02:27:10 event -- event/event.sh@49 -- # uname -s
00:05:40.235 02:27:10 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:05:40.235 02:27:10 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:40.235 02:27:10 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:40.235 02:27:10 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:40.235 02:27:10 event -- common/autotest_common.sh@10 -- # set +x
00:05:40.495 ************************************
00:05:40.495 START TEST event_scheduler
00:05:40.495 ************************************
00:05:40.495 02:27:10 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:40.495 * Looking for test storage...
00:05:40.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:05:40.495 02:27:10 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:40.495 02:27:11 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version
00:05:40.495 02:27:11 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:40.495 02:27:11 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:40.495 02:27:11 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:40.495 02:27:11 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:40.495 02:27:11 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:40.495 02:27:11 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:05:40.495 02:27:11 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:05:40.495 02:27:11 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:05:40.495 02:27:11 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:05:40.495 02:27:11 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:05:40.495 02:27:11 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:05:40.495 02:27:11 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:05:40.495 02:27:11 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:40.495 02:27:11 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:05:40.495 02:27:11 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:05:40.495 02:27:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:40.495 02:27:11 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:40.495 02:27:11 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:05:40.495 02:27:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:05:40.495 02:27:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:40.495 02:27:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:05:40.495 02:27:11 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:05:40.495 02:27:11 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:05:40.495 02:27:11 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:05:40.495 02:27:11 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:40.495 02:27:11 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:05:40.495 02:27:11 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:05:40.495 02:27:11 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:40.495 02:27:11 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:40.495 02:27:11 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:05:40.495 02:27:11 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:40.495 02:27:11 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:40.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:40.495 --rc genhtml_branch_coverage=1
00:05:40.495 --rc genhtml_function_coverage=1
00:05:40.495 --rc genhtml_legend=1
00:05:40.495 --rc geninfo_all_blocks=1
00:05:40.495 --rc geninfo_unexecuted_blocks=1
00:05:40.495
00:05:40.495 '
00:05:40.495 02:27:11 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:40.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:40.495 --rc genhtml_branch_coverage=1
00:05:40.495 --rc genhtml_function_coverage=1
00:05:40.495 --rc genhtml_legend=1
00:05:40.495 --rc geninfo_all_blocks=1
00:05:40.495 --rc geninfo_unexecuted_blocks=1
00:05:40.495
00:05:40.495 '
00:05:40.495 02:27:11 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:40.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:40.495 --rc genhtml_branch_coverage=1
00:05:40.495 --rc genhtml_function_coverage=1
00:05:40.495 --rc genhtml_legend=1
00:05:40.495 --rc geninfo_all_blocks=1
00:05:40.495 --rc geninfo_unexecuted_blocks=1
00:05:40.495
00:05:40.495 '
00:05:40.495 02:27:11 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:40.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:40.495 --rc genhtml_branch_coverage=1
00:05:40.495 --rc genhtml_function_coverage=1
00:05:40.495 --rc genhtml_legend=1
00:05:40.495 --rc geninfo_all_blocks=1
00:05:40.495 --rc geninfo_unexecuted_blocks=1
00:05:40.495
00:05:40.495 '
00:05:40.495 02:27:11 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:05:40.495 02:27:11 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=780645
00:05:40.495 02:27:11 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:05:40.495 02:27:11 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:05:40.495 02:27:11 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 780645
00:05:40.495 02:27:11 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 780645 ']'
00:05:40.495 02:27:11 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:40.495 02:27:11 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:40.495 02:27:11 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:40.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:40.495 02:27:11 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:40.495 02:27:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:40.496 [2024-12-16 02:27:11.128047] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:05:40.496 [2024-12-16 02:27:11.128092] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780645 ]
00:05:40.754 [2024-12-16 02:27:11.201180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:40.754 [2024-12-16 02:27:11.227281] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:40.754 [2024-12-16 02:27:11.227381] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:05:40.754 [2024-12-16 02:27:11.227449] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:05:40.754 [2024-12-16 02:27:11.227450] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:05:40.754 02:27:11 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:40.754 02:27:11 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:05:40.754 02:27:11 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:05:40.754 02:27:11 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:40.754 02:27:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:40.754 [2024-12-16 02:27:11.288106] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:05:40.754 [2024-12-16 02:27:11.288122] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:05:40.754 [2024-12-16 02:27:11.288130] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:05:40.754 [2024-12-16 02:27:11.288136] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:05:40.754 [2024-12-16 02:27:11.288140] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:05:40.754 02:27:11 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:40.754 02:27:11 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:05:40.754 02:27:11 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:40.754 02:27:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:40.754 [2024-12-16 02:27:11.358085] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:05:40.754 02:27:11 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:40.754 02:27:11 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:40.754 02:27:11 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:40.754 02:27:11 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:40.754 02:27:11 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:40.754 ************************************
00:05:40.754 START TEST scheduler_create_thread
00:05:40.754 ************************************
00:05:40.754 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:05:40.754 02:27:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:40.754 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:40.754 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:40.754 2
00:05:40.754 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:40.754 02:27:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:05:40.754 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:40.754 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:41.013 3
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:41.013 4
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:41.013 5
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:41.013 6
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:41.013 7
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:41.013 8
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:41.013 9
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:41.013 10
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:41.013 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:41.580 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:41.580 02:27:11
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:41.580 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:41.580 02:27:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.954 02:27:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:42.954 02:27:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:42.954 02:27:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:42.954 02:27:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:42.954 02:27:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.889 02:27:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:43.889 00:05:43.889 real 0m3.101s 00:05:43.889 user 0m0.025s 00:05:43.889 sys 0m0.006s 00:05:43.889 02:27:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:43.889 02:27:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:43.889 ************************************ 00:05:43.889 END TEST scheduler_create_thread 00:05:43.889 ************************************ 00:05:43.889 02:27:14 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:43.889 02:27:14 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 780645 00:05:43.889 02:27:14 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 780645 ']' 00:05:43.889 02:27:14 event.event_scheduler -- common/autotest_common.sh@958 -- # kill 
-0 780645 00:05:43.889 02:27:14 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:43.889 02:27:14 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.889 02:27:14 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 780645 00:05:44.147 02:27:14 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:44.147 02:27:14 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:44.147 02:27:14 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 780645' 00:05:44.147 killing process with pid 780645 00:05:44.147 02:27:14 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 780645 00:05:44.147 02:27:14 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 780645 00:05:44.405 [2024-12-16 02:27:14.877394] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:44.405 00:05:44.405 real 0m4.152s 00:05:44.405 user 0m6.679s 00:05:44.405 sys 0m0.391s 00:05:44.405 02:27:15 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.405 02:27:15 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:44.405 ************************************ 00:05:44.405 END TEST event_scheduler 00:05:44.405 ************************************ 00:05:44.662 02:27:15 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:44.662 02:27:15 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:44.662 02:27:15 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.662 02:27:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.662 02:27:15 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.662 ************************************ 00:05:44.662 START TEST app_repeat 00:05:44.662 ************************************ 00:05:44.662 02:27:15 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:44.662 02:27:15 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.662 02:27:15 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.662 02:27:15 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:44.662 02:27:15 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.662 02:27:15 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:44.662 02:27:15 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:44.662 02:27:15 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:44.662 02:27:15 event.app_repeat -- event/event.sh@19 -- # repeat_pid=781383 00:05:44.662 02:27:15 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:44.662 02:27:15 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:44.662 02:27:15 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 781383' 00:05:44.662 Process app_repeat pid: 781383 00:05:44.662 02:27:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:44.662 02:27:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:44.662 spdk_app_start Round 0 00:05:44.662 02:27:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 781383 /var/tmp/spdk-nbd.sock 00:05:44.662 02:27:15 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 781383 ']' 00:05:44.662 02:27:15 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:44.662 02:27:15 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.662 02:27:15 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:44.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:44.663 02:27:15 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.663 02:27:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.663 [2024-12-16 02:27:15.168263] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:44.663 [2024-12-16 02:27:15.168315] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid781383 ] 00:05:44.663 [2024-12-16 02:27:15.242228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.663 [2024-12-16 02:27:15.264481] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.663 [2024-12-16 02:27:15.264483] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.920 02:27:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.920 02:27:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:44.920 02:27:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.920 Malloc0 00:05:44.920 02:27:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.178 Malloc1 00:05:45.178 02:27:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.178 02:27:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.178 02:27:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.178 02:27:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:45.178 02:27:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.178 02:27:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:45.178 02:27:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.178 
02:27:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.178 02:27:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.178 02:27:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:45.178 02:27:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.178 02:27:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:45.178 02:27:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:45.178 02:27:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:45.178 02:27:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.178 02:27:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:45.435 /dev/nbd0 00:05:45.435 02:27:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:45.435 02:27:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:45.435 02:27:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:45.435 02:27:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:45.435 02:27:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:45.435 02:27:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:45.435 02:27:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:45.435 02:27:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:45.435 02:27:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:45.435 02:27:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:45.435 02:27:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:45.435 1+0 records in 00:05:45.435 1+0 records out 00:05:45.435 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000224922 s, 18.2 MB/s 00:05:45.435 02:27:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.435 02:27:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:45.435 02:27:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.435 02:27:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:45.435 02:27:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:45.435 02:27:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.435 02:27:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.435 02:27:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:45.693 /dev/nbd1 00:05:45.693 02:27:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:45.693 02:27:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:45.693 02:27:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:45.693 02:27:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:45.693 02:27:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:45.693 02:27:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:45.693 02:27:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:45.693 02:27:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:45.693 02:27:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:45.693 02:27:16 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:45.693 02:27:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.693 1+0 records in 00:05:45.693 1+0 records out 00:05:45.693 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000147742 s, 27.7 MB/s 00:05:45.693 02:27:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.693 02:27:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:45.693 02:27:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.693 02:27:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:45.693 02:27:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:45.693 02:27:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.693 02:27:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.693 02:27:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.693 02:27:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.693 02:27:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.951 02:27:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:45.951 { 00:05:45.951 "nbd_device": "/dev/nbd0", 00:05:45.951 "bdev_name": "Malloc0" 00:05:45.951 }, 00:05:45.951 { 00:05:45.951 "nbd_device": "/dev/nbd1", 00:05:45.951 "bdev_name": "Malloc1" 00:05:45.951 } 00:05:45.951 ]' 00:05:45.951 02:27:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:45.951 { 00:05:45.951 "nbd_device": "/dev/nbd0", 00:05:45.951 "bdev_name": "Malloc0" 00:05:45.951 
}, 00:05:45.951 { 00:05:45.951 "nbd_device": "/dev/nbd1", 00:05:45.951 "bdev_name": "Malloc1" 00:05:45.951 } 00:05:45.951 ]' 00:05:45.951 02:27:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.951 02:27:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:45.951 /dev/nbd1' 00:05:45.951 02:27:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:45.951 /dev/nbd1' 00:05:45.951 02:27:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.951 02:27:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:45.951 02:27:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:45.951 02:27:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:45.951 02:27:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:45.951 02:27:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:45.951 02:27:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.951 02:27:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.951 02:27:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:45.951 02:27:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:45.951 02:27:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:45.951 02:27:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:45.951 256+0 records in 00:05:45.951 256+0 records out 00:05:45.951 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010004 s, 105 MB/s 00:05:45.951 02:27:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.951 02:27:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:45.951 256+0 records in 00:05:45.951 256+0 records out 00:05:45.951 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139571 s, 75.1 MB/s 00:05:45.951 02:27:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.951 02:27:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:45.951 256+0 records in 00:05:45.951 256+0 records out 00:05:45.952 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145279 s, 72.2 MB/s 00:05:45.952 02:27:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:45.952 02:27:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.952 02:27:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.952 02:27:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:45.952 02:27:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.209 02:27:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:46.209 02:27:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:46.209 02:27:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.209 02:27:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:46.209 02:27:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.209 02:27:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:46.209 02:27:16 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.209 02:27:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:46.209 02:27:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.209 02:27:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.209 02:27:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:46.209 02:27:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:46.209 02:27:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.209 02:27:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:46.209 02:27:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:46.209 02:27:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:46.209 02:27:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:46.209 02:27:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.209 02:27:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.210 02:27:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:46.210 02:27:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.210 02:27:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.210 02:27:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.210 02:27:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:46.467 02:27:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:46.467 02:27:17 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:46.467 02:27:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:46.467 02:27:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.467 02:27:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.468 02:27:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:46.468 02:27:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.468 02:27:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.468 02:27:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.468 02:27:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.468 02:27:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.725 02:27:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:46.725 02:27:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:46.725 02:27:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.725 02:27:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:46.725 02:27:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:46.725 02:27:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.725 02:27:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:46.725 02:27:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:46.725 02:27:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:46.725 02:27:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:46.725 02:27:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:46.725 02:27:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:46.725 02:27:17 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:46.984 02:27:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:47.242 [2024-12-16 02:27:17.646883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.242 [2024-12-16 02:27:17.666935] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.242 [2024-12-16 02:27:17.666935] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.242 [2024-12-16 02:27:17.706486] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:47.242 [2024-12-16 02:27:17.706526] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:50.522 02:27:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:50.522 02:27:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:50.522 spdk_app_start Round 1 00:05:50.522 02:27:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 781383 /var/tmp/spdk-nbd.sock 00:05:50.522 02:27:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 781383 ']' 00:05:50.522 02:27:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.522 02:27:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.522 02:27:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:50.522 02:27:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.522 02:27:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.522 02:27:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.522 02:27:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:50.522 02:27:20 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.522 Malloc0 00:05:50.522 02:27:20 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.522 Malloc1 00:05:50.522 02:27:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.522 02:27:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.522 02:27:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.522 02:27:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:50.522 02:27:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.522 02:27:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:50.522 02:27:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.522 02:27:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.522 02:27:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.522 02:27:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:50.522 02:27:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.522 02:27:21 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:50.522 02:27:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:50.522 02:27:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:50.522 02:27:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.522 02:27:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:50.779 /dev/nbd0 00:05:50.779 02:27:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:50.779 02:27:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:50.779 02:27:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:50.779 02:27:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:50.779 02:27:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:50.779 02:27:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:50.779 02:27:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:50.779 02:27:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:50.779 02:27:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:50.779 02:27:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:50.779 02:27:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:50.779 1+0 records in 00:05:50.779 1+0 records out 00:05:50.779 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199948 s, 20.5 MB/s 00:05:50.779 02:27:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.779 02:27:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:50.779 02:27:21 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:50.779 02:27:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:50.779 02:27:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:50.779 02:27:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:50.779 02:27:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.779 02:27:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:51.036 /dev/nbd1 00:05:51.036 02:27:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:51.036 02:27:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:51.036 02:27:21 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:51.036 02:27:21 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:51.036 02:27:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:51.036 02:27:21 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:51.036 02:27:21 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:51.036 02:27:21 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:51.036 02:27:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:51.036 02:27:21 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:51.036 02:27:21 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.036 1+0 records in 00:05:51.036 1+0 records out 00:05:51.036 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246622 s, 16.6 MB/s 00:05:51.036 02:27:21 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.036 02:27:21 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:51.036 02:27:21 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.036 02:27:21 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:51.036 02:27:21 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:51.036 02:27:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.036 02:27:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.036 02:27:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.036 02:27:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.036 02:27:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:51.294 { 00:05:51.294 "nbd_device": "/dev/nbd0", 00:05:51.294 "bdev_name": "Malloc0" 00:05:51.294 }, 00:05:51.294 { 00:05:51.294 "nbd_device": "/dev/nbd1", 00:05:51.294 "bdev_name": "Malloc1" 00:05:51.294 } 00:05:51.294 ]' 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:51.294 { 00:05:51.294 "nbd_device": "/dev/nbd0", 00:05:51.294 "bdev_name": "Malloc0" 00:05:51.294 }, 00:05:51.294 { 00:05:51.294 "nbd_device": "/dev/nbd1", 00:05:51.294 "bdev_name": "Malloc1" 00:05:51.294 } 00:05:51.294 ]' 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:51.294 /dev/nbd1' 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:51.294 /dev/nbd1' 00:05:51.294 
02:27:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:51.294 256+0 records in 00:05:51.294 256+0 records out 00:05:51.294 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010629 s, 98.7 MB/s 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:51.294 256+0 records in 00:05:51.294 256+0 records out 00:05:51.294 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013933 s, 75.3 MB/s 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:51.294 256+0 records in 00:05:51.294 256+0 records out 00:05:51.294 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150197 s, 69.8 MB/s 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.294 02:27:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:51.552 02:27:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:51.552 02:27:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:51.552 02:27:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:51.552 02:27:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.552 02:27:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.552 02:27:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:51.552 02:27:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:51.552 02:27:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.552 02:27:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.552 02:27:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:51.810 02:27:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:51.810 02:27:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:51.810 02:27:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:51.810 02:27:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.810 02:27:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.810 02:27:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:51.810 02:27:22 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:51.810 02:27:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.810 02:27:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.810 02:27:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.810 02:27:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.068 02:27:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:52.068 02:27:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:52.068 02:27:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.068 02:27:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:52.068 02:27:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:52.068 02:27:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.068 02:27:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:52.068 02:27:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:52.068 02:27:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:52.068 02:27:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:52.068 02:27:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:52.068 02:27:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:52.068 02:27:22 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:52.326 02:27:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:52.584 [2024-12-16 02:27:22.997510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.584 [2024-12-16 02:27:23.017563] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.584 [2024-12-16 02:27:23.017564] 
reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.584 [2024-12-16 02:27:23.058617] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:52.584 [2024-12-16 02:27:23.058657] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:55.863 02:27:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:55.863 02:27:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:55.863 spdk_app_start Round 2 00:05:55.863 02:27:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 781383 /var/tmp/spdk-nbd.sock 00:05:55.863 02:27:25 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 781383 ']' 00:05:55.863 02:27:25 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:55.863 02:27:25 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.863 02:27:25 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:55.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:55.863 02:27:25 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.863 02:27:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:55.863 02:27:26 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.863 02:27:26 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:55.863 02:27:26 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.863 Malloc0 00:05:55.863 02:27:26 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.863 Malloc1 00:05:55.863 02:27:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.863 02:27:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.863 02:27:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.863 02:27:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:55.863 02:27:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.863 02:27:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:55.863 02:27:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.863 02:27:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.863 02:27:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.863 02:27:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:55.863 02:27:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.863 02:27:26 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:55.863 02:27:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:55.863 02:27:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:55.863 02:27:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.863 02:27:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:56.120 /dev/nbd0 00:05:56.120 02:27:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:56.120 02:27:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:56.120 02:27:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:56.120 02:27:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:56.120 02:27:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:56.120 02:27:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:56.120 02:27:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:56.120 02:27:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:56.120 02:27:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:56.120 02:27:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:56.120 02:27:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.120 1+0 records in 00:05:56.120 1+0 records out 00:05:56.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202332 s, 20.2 MB/s 00:05:56.120 02:27:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:56.120 02:27:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:56.120 02:27:26 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:56.120 02:27:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:56.121 02:27:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:56.121 02:27:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.121 02:27:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.121 02:27:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:56.378 /dev/nbd1 00:05:56.378 02:27:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:56.378 02:27:26 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:56.378 02:27:26 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:56.378 02:27:26 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:56.378 02:27:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:56.378 02:27:26 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:56.378 02:27:26 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:56.378 02:27:26 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:56.378 02:27:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:56.378 02:27:26 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:56.378 02:27:26 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.378 1+0 records in 00:05:56.378 1+0 records out 00:05:56.378 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199159 s, 20.6 MB/s 00:05:56.378 02:27:26 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:56.378 02:27:26 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:56.378 02:27:26 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:56.378 02:27:26 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:56.378 02:27:26 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:56.378 02:27:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.378 02:27:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.378 02:27:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.378 02:27:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.378 02:27:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:56.635 { 00:05:56.635 "nbd_device": "/dev/nbd0", 00:05:56.635 "bdev_name": "Malloc0" 00:05:56.635 }, 00:05:56.635 { 00:05:56.635 "nbd_device": "/dev/nbd1", 00:05:56.635 "bdev_name": "Malloc1" 00:05:56.635 } 00:05:56.635 ]' 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:56.635 { 00:05:56.635 "nbd_device": "/dev/nbd0", 00:05:56.635 "bdev_name": "Malloc0" 00:05:56.635 }, 00:05:56.635 { 00:05:56.635 "nbd_device": "/dev/nbd1", 00:05:56.635 "bdev_name": "Malloc1" 00:05:56.635 } 00:05:56.635 ]' 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:56.635 /dev/nbd1' 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:56.635 /dev/nbd1' 00:05:56.635 
02:27:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:56.635 256+0 records in 00:05:56.635 256+0 records out 00:05:56.635 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102223 s, 103 MB/s 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:56.635 256+0 records in 00:05:56.635 256+0 records out 00:05:56.635 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138098 s, 75.9 MB/s 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:56.635 256+0 records in 00:05:56.635 256+0 records out 00:05:56.635 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144266 s, 72.7 MB/s 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.635 02:27:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:56.892 02:27:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:56.892 02:27:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:56.892 02:27:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:56.892 02:27:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:56.892 02:27:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:56.892 02:27:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:56.892 02:27:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:56.892 02:27:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:56.892 02:27:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:56.893 02:27:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:57.149 02:27:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:57.149 02:27:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:57.149 02:27:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:57.149 02:27:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.149 02:27:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.149 02:27:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:57.149 02:27:27 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:57.149 02:27:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.149 02:27:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:57.149 02:27:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.149 02:27:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.406 02:27:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:57.406 02:27:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:57.406 02:27:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:57.406 02:27:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:57.406 02:27:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:57.406 02:27:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:57.406 02:27:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:57.406 02:27:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:57.406 02:27:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:57.406 02:27:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:57.406 02:27:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:57.406 02:27:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:57.406 02:27:27 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:57.664 02:27:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:57.664 [2024-12-16 02:27:28.303529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.664 [2024-12-16 02:27:28.323713] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.664 [2024-12-16 02:27:28.323713] 
reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.922 [2024-12-16 02:27:28.364836] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:57.922 [2024-12-16 02:27:28.364878] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:01.199 02:27:31 event.app_repeat -- event/event.sh@38 -- # waitforlisten 781383 /var/tmp/spdk-nbd.sock 00:06:01.199 02:27:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 781383 ']' 00:06:01.199 02:27:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:01.199 02:27:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.199 02:27:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:01.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:01.199 02:27:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.199 02:27:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:01.199 02:27:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.199 02:27:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:01.199 02:27:31 event.app_repeat -- event/event.sh@39 -- # killprocess 781383 00:06:01.199 02:27:31 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 781383 ']' 00:06:01.199 02:27:31 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 781383 00:06:01.199 02:27:31 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:01.199 02:27:31 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.199 02:27:31 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 781383 00:06:01.199 02:27:31 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.199 02:27:31 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.200 02:27:31 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 781383' 00:06:01.200 killing process with pid 781383 00:06:01.200 02:27:31 event.app_repeat -- common/autotest_common.sh@973 -- # kill 781383 00:06:01.200 02:27:31 event.app_repeat -- common/autotest_common.sh@978 -- # wait 781383 00:06:01.200 spdk_app_start is called in Round 0. 00:06:01.200 Shutdown signal received, stop current app iteration 00:06:01.200 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization... 00:06:01.200 spdk_app_start is called in Round 1. 00:06:01.200 Shutdown signal received, stop current app iteration 00:06:01.200 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization... 00:06:01.200 spdk_app_start is called in Round 2. 
00:06:01.200 Shutdown signal received, stop current app iteration 00:06:01.200 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization... 00:06:01.200 spdk_app_start is called in Round 3. 00:06:01.200 Shutdown signal received, stop current app iteration 00:06:01.200 02:27:31 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:01.200 02:27:31 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:01.200 00:06:01.200 real 0m16.401s 00:06:01.200 user 0m36.201s 00:06:01.200 sys 0m2.524s 00:06:01.200 02:27:31 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.200 02:27:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:01.200 ************************************ 00:06:01.200 END TEST app_repeat 00:06:01.200 ************************************ 00:06:01.200 02:27:31 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:01.200 02:27:31 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:01.200 02:27:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.200 02:27:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.200 02:27:31 event -- common/autotest_common.sh@10 -- # set +x 00:06:01.200 ************************************ 00:06:01.200 START TEST cpu_locks 00:06:01.200 ************************************ 00:06:01.200 02:27:31 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:01.200 * Looking for test storage... 
00:06:01.200 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:01.200 02:27:31 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:01.200 02:27:31 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:01.200 02:27:31 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:01.200 02:27:31 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:01.200 02:27:31 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:01.200 02:27:31 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:01.200 02:27:31 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:01.200 02:27:31 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.200 02:27:31 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:01.200 02:27:31 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:01.200 02:27:31 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:01.200 02:27:31 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:01.200 02:27:31 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:01.200 02:27:31 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:01.200 02:27:31 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:01.200 02:27:31 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:01.200 02:27:31 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:01.200 02:27:31 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:01.200 02:27:31 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:01.200 02:27:31 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:01.200 02:27:31 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:01.200 02:27:31 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.200 02:27:31 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:01.200 02:27:31 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:01.200 02:27:31 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:01.200 02:27:31 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:01.200 02:27:31 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.200 02:27:31 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:01.200 02:27:31 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:01.200 02:27:31 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:01.200 02:27:31 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:01.200 02:27:31 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:01.200 02:27:31 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.200 02:27:31 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:01.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.200 --rc genhtml_branch_coverage=1 00:06:01.200 --rc genhtml_function_coverage=1 00:06:01.200 --rc genhtml_legend=1 00:06:01.200 --rc geninfo_all_blocks=1 00:06:01.200 --rc geninfo_unexecuted_blocks=1 00:06:01.200 00:06:01.200 ' 00:06:01.200 02:27:31 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:01.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.200 --rc genhtml_branch_coverage=1 00:06:01.200 --rc genhtml_function_coverage=1 00:06:01.200 --rc genhtml_legend=1 00:06:01.200 --rc geninfo_all_blocks=1 00:06:01.200 --rc geninfo_unexecuted_blocks=1 
00:06:01.200 00:06:01.200 ' 00:06:01.200 02:27:31 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:01.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.200 --rc genhtml_branch_coverage=1 00:06:01.200 --rc genhtml_function_coverage=1 00:06:01.200 --rc genhtml_legend=1 00:06:01.200 --rc geninfo_all_blocks=1 00:06:01.200 --rc geninfo_unexecuted_blocks=1 00:06:01.200 00:06:01.200 ' 00:06:01.200 02:27:31 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:01.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.200 --rc genhtml_branch_coverage=1 00:06:01.200 --rc genhtml_function_coverage=1 00:06:01.200 --rc genhtml_legend=1 00:06:01.200 --rc geninfo_all_blocks=1 00:06:01.200 --rc geninfo_unexecuted_blocks=1 00:06:01.200 00:06:01.200 ' 00:06:01.200 02:27:31 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:01.200 02:27:31 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:01.200 02:27:31 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:01.200 02:27:31 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:01.200 02:27:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.200 02:27:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.200 02:27:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.200 ************************************ 00:06:01.200 START TEST default_locks 00:06:01.200 ************************************ 00:06:01.200 02:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:01.200 02:27:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=784309 00:06:01.200 02:27:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 784309 00:06:01.200 02:27:31 event.cpu_locks.default_locks 
-- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:01.200 02:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 784309 ']' 00:06:01.200 02:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.200 02:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.200 02:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.200 02:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.200 02:27:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.200 [2024-12-16 02:27:31.857824] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:01.200 [2024-12-16 02:27:31.857872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784309 ] 00:06:01.459 [2024-12-16 02:27:31.928996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.459 [2024-12-16 02:27:31.951122] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.718 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.718 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:01.718 02:27:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 784309 00:06:01.718 02:27:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 784309 00:06:01.718 02:27:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:01.976 lslocks: write error 00:06:01.976 02:27:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 784309 00:06:01.976 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 784309 ']' 00:06:01.976 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 784309 00:06:01.976 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:02.234 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.234 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 784309 00:06:02.234 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.234 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.234 02:27:32 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 784309' 00:06:02.234 killing process with pid 784309 00:06:02.234 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 784309 00:06:02.234 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 784309 00:06:02.493 02:27:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 784309 00:06:02.493 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:02.493 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 784309 00:06:02.493 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:02.493 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.493 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:02.493 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:02.493 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 784309 00:06:02.494 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 784309 ']' 00:06:02.494 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.494 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.494 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:02.494 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.494 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.494 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (784309) - No such process 00:06:02.494 ERROR: process (pid: 784309) is no longer running 00:06:02.494 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.494 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:02.494 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:02.494 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:02.494 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:02.494 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:02.494 02:27:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:02.494 02:27:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:02.494 02:27:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:02.494 02:27:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:02.494 00:06:02.494 real 0m1.180s 00:06:02.494 user 0m1.150s 00:06:02.494 sys 0m0.542s 00:06:02.494 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:02.494 02:27:32 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.494 ************************************ 00:06:02.494 END TEST default_locks 00:06:02.494 ************************************ 00:06:02.494 02:27:33 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:02.494 02:27:33 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:02.494 02:27:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:02.494 02:27:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:02.494 ************************************ 00:06:02.494 START TEST default_locks_via_rpc 00:06:02.494 ************************************ 00:06:02.494 02:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:02.494 02:27:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=784565 00:06:02.494 02:27:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 784565 00:06:02.494 02:27:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.494 02:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 784565 ']' 00:06:02.494 02:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.494 02:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.494 02:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.494 02:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.494 02:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:02.494 [2024-12-16 02:27:33.111613] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:02.494 [2024-12-16 02:27:33.111659] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784565 ] 00:06:02.752 [2024-12-16 02:27:33.188234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.752 [2024-12-16 02:27:33.208106] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.010 02:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:03.010 02:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:03.010 02:27:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:03.010 02:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.010 02:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.010 02:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.010 02:27:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:03.010 02:27:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:03.010 02:27:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:03.010 02:27:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:03.010 02:27:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:03.010 02:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:03.010 02:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.010 02:27:33 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:03.010 02:27:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 784565 00:06:03.010 02:27:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 784565 00:06:03.010 02:27:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:03.269 02:27:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 784565 00:06:03.269 02:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 784565 ']' 00:06:03.269 02:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 784565 00:06:03.269 02:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:03.269 02:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.269 02:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 784565 00:06:03.269 02:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.269 02:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.269 02:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 784565' 00:06:03.269 killing process with pid 784565 00:06:03.269 02:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 784565 00:06:03.269 02:27:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 784565 00:06:03.528 00:06:03.528 real 0m1.025s 00:06:03.528 user 0m0.990s 00:06:03.528 sys 0m0.466s 00:06:03.528 02:27:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.528 02:27:34 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:03.528 ************************************ 00:06:03.528 END TEST default_locks_via_rpc 00:06:03.528 ************************************ 00:06:03.528 02:27:34 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:03.528 02:27:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.528 02:27:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.528 02:27:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.528 ************************************ 00:06:03.528 START TEST non_locking_app_on_locked_coremask 00:06:03.528 ************************************ 00:06:03.528 02:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:03.528 02:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=784815 00:06:03.528 02:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 784815 /var/tmp/spdk.sock 00:06:03.528 02:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.528 02:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 784815 ']' 00:06:03.528 02:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.528 02:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:03.528 02:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:03.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.528 02:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:03.528 02:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.787 [2024-12-16 02:27:34.203979] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:03.787 [2024-12-16 02:27:34.204021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784815 ] 00:06:03.787 [2024-12-16 02:27:34.279460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.787 [2024-12-16 02:27:34.302141] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.045 02:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.045 02:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:04.045 02:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=784821 00:06:04.045 02:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 784821 /var/tmp/spdk2.sock 00:06:04.045 02:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:04.045 02:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 784821 ']' 00:06:04.045 02:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:06:04.045 02:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.045 02:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:04.045 02:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.045 02:27:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.045 [2024-12-16 02:27:34.558389] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:04.045 [2024-12-16 02:27:34.558434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784821 ] 00:06:04.045 [2024-12-16 02:27:34.644863] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:04.045 [2024-12-16 02:27:34.644886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.045 [2024-12-16 02:27:34.690955] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.990 02:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.990 02:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:04.990 02:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 784815 00:06:04.990 02:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 784815 00:06:04.990 02:27:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.556 lslocks: write error 00:06:05.556 02:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 784815 00:06:05.556 02:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 784815 ']' 00:06:05.556 02:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 784815 00:06:05.556 02:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:05.556 02:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.556 02:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 784815 00:06:05.556 02:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.556 02:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.556 02:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 784815' 00:06:05.556 killing process with pid 784815 00:06:05.556 02:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 784815 00:06:05.556 02:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 784815 00:06:06.123 02:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 784821 00:06:06.123 02:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 784821 ']' 00:06:06.123 02:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 784821 00:06:06.123 02:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:06.123 02:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:06.123 02:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 784821 00:06:06.123 02:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:06.123 02:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:06.123 02:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 784821' 00:06:06.123 killing process with pid 784821 00:06:06.123 02:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 784821 00:06:06.123 02:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 784821 00:06:06.382 00:06:06.382 real 0m2.837s 00:06:06.382 user 0m2.967s 00:06:06.382 sys 0m0.955s 00:06:06.382 02:27:36 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.382 02:27:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.382 ************************************ 00:06:06.382 END TEST non_locking_app_on_locked_coremask 00:06:06.382 ************************************ 00:06:06.382 02:27:37 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:06.382 02:27:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.382 02:27:37 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.382 02:27:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.641 ************************************ 00:06:06.641 START TEST locking_app_on_unlocked_coremask 00:06:06.641 ************************************ 00:06:06.641 02:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:06.641 02:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=785303 00:06:06.641 02:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 785303 /var/tmp/spdk.sock 00:06:06.641 02:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:06.641 02:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 785303 ']' 00:06:06.641 02:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.641 02:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.641 02:27:37 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:06.641 02:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.641 02:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.641 [2024-12-16 02:27:37.110330] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:06.641 [2024-12-16 02:27:37.110373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785303 ] 00:06:06.641 [2024-12-16 02:27:37.185854] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:06.641 [2024-12-16 02:27:37.185877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.641 [2024-12-16 02:27:37.206502] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.900 02:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.900 02:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:06.900 02:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=785371 00:06:06.900 02:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 785371 /var/tmp/spdk2.sock 00:06:06.900 02:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:06.900 02:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 785371 ']' 00:06:06.900 02:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.900 02:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.900 02:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.900 02:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.900 02:27:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.900 [2024-12-16 02:27:37.472128] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:06.900 [2024-12-16 02:27:37.472179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785371 ] 00:06:07.158 [2024-12-16 02:27:37.563131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.158 [2024-12-16 02:27:37.605356] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.724 02:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.724 02:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:07.724 02:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 785371 00:06:07.724 02:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 785371 00:06:07.724 02:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:07.982 lslocks: write error 00:06:07.982 02:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 785303 00:06:07.982 02:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 785303 ']' 00:06:07.982 02:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 785303 00:06:07.982 02:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:07.982 02:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.982 02:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 785303 00:06:08.241 02:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.241 02:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.241 02:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 785303' 00:06:08.241 killing process with pid 785303 00:06:08.241 02:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 785303 00:06:08.241 02:27:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 785303 00:06:08.868 02:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 785371 00:06:08.868 02:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 785371 ']' 00:06:08.868 02:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 785371 00:06:08.868 02:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:08.868 02:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:08.868 02:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 785371 00:06:08.868 02:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:08.868 02:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:08.868 02:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 785371' 00:06:08.868 killing process with pid 785371 00:06:08.868 02:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 785371 00:06:08.868 02:27:39 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 785371 00:06:09.167 00:06:09.167 real 0m2.526s 00:06:09.167 user 0m2.659s 00:06:09.167 sys 0m0.851s 00:06:09.167 02:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.167 02:27:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.167 ************************************ 00:06:09.167 END TEST locking_app_on_unlocked_coremask 00:06:09.167 ************************************ 00:06:09.167 02:27:39 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:09.167 02:27:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:09.167 02:27:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:09.167 02:27:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:09.167 ************************************ 00:06:09.167 START TEST locking_app_on_locked_coremask 00:06:09.167 ************************************ 00:06:09.167 02:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:09.167 02:27:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=785788 00:06:09.167 02:27:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 785788 /var/tmp/spdk.sock 00:06:09.167 02:27:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:09.167 02:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 785788 ']' 00:06:09.167 02:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:06:09.167 02:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.167 02:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.167 02:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.167 02:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.167 [2024-12-16 02:27:39.705233] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:09.167 [2024-12-16 02:27:39.705273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785788 ] 00:06:09.167 [2024-12-16 02:27:39.779920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.167 [2024-12-16 02:27:39.804942] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.474 02:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.474 02:27:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:09.474 02:27:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=785798 00:06:09.474 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 785798 /var/tmp/spdk2.sock 00:06:09.474 02:27:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:06:09.474 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:09.474 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 785798 /var/tmp/spdk2.sock 00:06:09.474 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:09.474 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.474 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:09.474 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.474 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 785798 /var/tmp/spdk2.sock 00:06:09.474 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 785798 ']' 00:06:09.474 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.474 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.474 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:09.474 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.474 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.474 [2024-12-16 02:27:40.050810] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:09.474 [2024-12-16 02:27:40.050865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785798 ] 00:06:09.746 [2024-12-16 02:27:40.140927] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 785788 has claimed it. 00:06:09.746 [2024-12-16 02:27:40.140964] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:10.313 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (785798) - No such process 00:06:10.313 ERROR: process (pid: 785798) is no longer running 00:06:10.313 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.313 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:10.313 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:10.313 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:10.313 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:10.313 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:10.313 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 785788 00:06:10.313 02:27:40 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 785788 00:06:10.313 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.313 lslocks: write error 00:06:10.313 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 785788 00:06:10.313 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 785788 ']' 00:06:10.313 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 785788 00:06:10.313 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:10.313 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.313 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 785788 00:06:10.313 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.313 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.313 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 785788' 00:06:10.313 killing process with pid 785788 00:06:10.313 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 785788 00:06:10.313 02:27:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 785788 00:06:10.881 00:06:10.881 real 0m1.595s 00:06:10.881 user 0m1.721s 00:06:10.881 sys 0m0.532s 00:06:10.881 02:27:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.881 02:27:41 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:06:10.881 ************************************ 00:06:10.881 END TEST locking_app_on_locked_coremask 00:06:10.881 ************************************ 00:06:10.881 02:27:41 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:10.881 02:27:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.881 02:27:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.881 02:27:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.881 ************************************ 00:06:10.881 START TEST locking_overlapped_coremask 00:06:10.881 ************************************ 00:06:10.881 02:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:10.881 02:27:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=786071 00:06:10.881 02:27:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:10.881 02:27:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 786071 /var/tmp/spdk.sock 00:06:10.881 02:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 786071 ']' 00:06:10.881 02:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.881 02:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.881 02:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
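The "Cannot create lock on core 0, probably process 785788 has claimed it" failure above, together with the `lslocks -p ... | grep spdk_cpu_lock` checks, reflects SPDK guarding each claimed CPU core with an exclusively locked file under /var/tmp (the `/var/tmp/spdk_cpu_lock_000` names appear later in this log). A minimal sketch of that mutual exclusion, under stated assumptions: this uses the util-linux `flock` utility and a hypothetical lock path purely for illustration, not SPDK's actual locking code.

```shell
lock=/tmp/demo_cpu_lock_000   # hypothetical stand-in for /var/tmp/spdk_cpu_lock_000

# First claimant: open the lock file on fd 200 and take an exclusive,
# non-blocking lock. This models what a running spdk_tgt would hold.
exec 200>"$lock"
flock -n 200 && echo "core 0 claimed"

# Second claimant: a fresh open (like a second spdk_tgt instance) gets its
# own open file description, so its lock attempt is rejected while fd 200
# still holds the lock.
if ! ( exec 201>"$lock"; flock -n 201 ); then
    echo "core 0 already claimed by another process"
fi

exec 200>&-   # closing the holding fd drops the lock
rm -f "$lock"
```

SPDK's own lock is likewise an advisory file lock, which is why `lslocks` can list it; the `flock` utility above only demonstrates the reject-second-claimant behaviour the test relies on.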
00:06:10.881 02:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.881 02:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.881 [2024-12-16 02:27:41.358614] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:10.881 [2024-12-16 02:27:41.358652] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786071 ] 00:06:10.881 [2024-12-16 02:27:41.432569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.881 [2024-12-16 02:27:41.455449] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.881 [2024-12-16 02:27:41.455560] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.881 [2024-12-16 02:27:41.455560] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.140 02:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.140 02:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:11.140 02:27:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=786231 00:06:11.140 02:27:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 786231 /var/tmp/spdk2.sock 00:06:11.140 02:27:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:11.140 02:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:11.140 02:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 786231 /var/tmp/spdk2.sock 00:06:11.140 02:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:11.140 02:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.140 02:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:11.140 02:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.140 02:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 786231 /var/tmp/spdk2.sock 00:06:11.140 02:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 786231 ']' 00:06:11.140 02:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:11.140 02:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.140 02:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:11.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:11.140 02:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.140 02:27:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.140 [2024-12-16 02:27:41.716614] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:11.140 [2024-12-16 02:27:41.716668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786231 ] 00:06:11.399 [2024-12-16 02:27:41.810818] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 786071 has claimed it. 00:06:11.399 [2024-12-16 02:27:41.810862] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:11.965 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (786231) - No such process 00:06:11.965 ERROR: process (pid: 786231) is no longer running 00:06:11.965 02:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.965 02:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:11.965 02:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:11.965 02:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:11.965 02:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:11.965 02:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:11.965 02:27:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:11.965 02:27:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:11.965 02:27:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:11.965 02:27:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:11.965 02:27:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 786071 00:06:11.965 02:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 786071 ']' 00:06:11.965 02:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 786071 00:06:11.965 02:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:11.965 02:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:11.965 02:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 786071 00:06:11.965 02:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:11.965 02:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:11.965 02:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 786071' 00:06:11.965 killing process with pid 786071 00:06:11.965 02:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 786071 00:06:11.966 02:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 786071 00:06:12.225 00:06:12.225 real 0m1.393s 00:06:12.225 user 0m3.887s 00:06:12.225 sys 0m0.407s 00:06:12.225 02:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.225 02:27:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.225 ************************************ 
00:06:12.225 END TEST locking_overlapped_coremask 00:06:12.225 ************************************ 00:06:12.225 02:27:42 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:12.225 02:27:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.225 02:27:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.225 02:27:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.225 ************************************ 00:06:12.225 START TEST locking_overlapped_coremask_via_rpc 00:06:12.225 ************************************ 00:06:12.225 02:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:12.225 02:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=786337 00:06:12.225 02:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 786337 /var/tmp/spdk.sock 00:06:12.225 02:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:12.225 02:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 786337 ']' 00:06:12.225 02:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.225 02:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.225 02:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:12.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.225 02:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.225 02:27:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.225 [2024-12-16 02:27:42.830110] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:12.225 [2024-12-16 02:27:42.830155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786337 ] 00:06:12.483 [2024-12-16 02:27:42.905837] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:12.483 [2024-12-16 02:27:42.905870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:12.483 [2024-12-16 02:27:42.931024] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.483 [2024-12-16 02:27:42.931132] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.483 [2024-12-16 02:27:42.931133] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.483 02:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.483 02:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:12.483 02:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=786545 00:06:12.483 02:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 786545 /var/tmp/spdk2.sock 00:06:12.483 02:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:06:12.483 02:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 786545 ']' 00:06:12.483 02:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.483 02:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.483 02:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.484 02:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.484 02:27:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.742 [2024-12-16 02:27:43.180455] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:12.742 [2024-12-16 02:27:43.180507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786545 ] 00:06:12.742 [2024-12-16 02:27:43.272227] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:12.742 [2024-12-16 02:27:43.272256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:12.742 [2024-12-16 02:27:43.321007] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.742 [2024-12-16 02:27:43.321127] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.742 [2024-12-16 02:27:43.321128] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:13.675 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.675 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:13.675 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:13.675 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.675 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.675 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.675 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:13.675 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:13.675 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:13.675 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:13.675 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.675 02:27:44 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:13.675 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.675 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:13.675 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.675 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.675 [2024-12-16 02:27:44.035917] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 786337 has claimed it. 00:06:13.675 request: 00:06:13.675 { 00:06:13.675 "method": "framework_enable_cpumask_locks", 00:06:13.675 "req_id": 1 00:06:13.675 } 00:06:13.675 Got JSON-RPC error response 00:06:13.675 response: 00:06:13.675 { 00:06:13.675 "code": -32603, 00:06:13.675 "message": "Failed to claim CPU core: 2" 00:06:13.675 } 00:06:13.675 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:13.675 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:13.675 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:13.675 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:13.675 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:13.676 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 786337 /var/tmp/spdk.sock 00:06:13.676 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- 
# '[' -z 786337 ']' 00:06:13.676 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.676 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.676 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.676 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.676 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.676 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.676 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:13.676 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 786545 /var/tmp/spdk2.sock 00:06:13.676 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 786545 ']' 00:06:13.676 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.676 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.676 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
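Several runs above (e.g. `NOT waitforlisten 786231 /var/tmp/spdk2.sock`) go through the harness's `valid_exec_arg`/`NOT` helpers: the test step passes only if the wrapped command fails, which is how an expected "Unable to acquire lock on assigned core mask - exiting" exit turns into a green result. A simplified sketch of that inversion, reduced from the idea in autotest_common.sh rather than copied from the exact helper:

```shell
# Succeed only when the wrapped command fails; fail when it succeeds.
NOT() {
    if "$@"; then
        return 1    # wrapped command unexpectedly succeeded
    fi
    return 0        # wrapped command failed, as the test expected
}

NOT false && echo "expected failure observed"
NOT true  || echo "unexpected success caught"
```

The real helper also records the error status (the `es=1` lines in this log) so later steps can assert on it; this sketch keeps only the exit-status inversion.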
00:06:13.676 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.676 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.934 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.934 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:13.934 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:13.934 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:13.934 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:13.934 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:13.934 00:06:13.934 real 0m1.702s 00:06:13.934 user 0m0.844s 00:06:13.934 sys 0m0.133s 00:06:13.934 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:13.934 02:27:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.934 ************************************ 00:06:13.934 END TEST locking_overlapped_coremask_via_rpc 00:06:13.934 ************************************ 00:06:13.934 02:27:44 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:13.934 02:27:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 786337 ]] 00:06:13.934 02:27:44 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 786337 00:06:13.934 02:27:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 786337 ']' 00:06:13.934 02:27:44 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 786337 00:06:13.934 02:27:44 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:13.934 02:27:44 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.934 02:27:44 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 786337 00:06:13.934 02:27:44 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:13.934 02:27:44 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:13.934 02:27:44 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 786337' 00:06:13.934 killing process with pid 786337 00:06:13.934 02:27:44 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 786337 00:06:13.934 02:27:44 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 786337 00:06:14.501 02:27:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 786545 ]] 00:06:14.501 02:27:44 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 786545 00:06:14.502 02:27:44 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 786545 ']' 00:06:14.502 02:27:44 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 786545 00:06:14.502 02:27:44 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:14.502 02:27:44 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.502 02:27:44 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 786545 00:06:14.502 02:27:44 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:14.502 02:27:44 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:14.502 02:27:44 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 786545' 00:06:14.502 
killing process with pid 786545 00:06:14.502 02:27:44 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 786545 00:06:14.502 02:27:44 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 786545 00:06:14.761 02:27:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:14.761 02:27:45 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:14.761 02:27:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 786337 ]] 00:06:14.761 02:27:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 786337 00:06:14.761 02:27:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 786337 ']' 00:06:14.761 02:27:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 786337 00:06:14.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (786337) - No such process 00:06:14.761 02:27:45 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 786337 is not found' 00:06:14.761 Process with pid 786337 is not found 00:06:14.761 02:27:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 786545 ]] 00:06:14.761 02:27:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 786545 00:06:14.761 02:27:45 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 786545 ']' 00:06:14.761 02:27:45 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 786545 00:06:14.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (786545) - No such process 00:06:14.761 02:27:45 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 786545 is not found' 00:06:14.761 Process with pid 786545 is not found 00:06:14.761 02:27:45 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:14.761 00:06:14.761 real 0m13.621s 00:06:14.761 user 0m24.032s 00:06:14.761 sys 0m4.838s 00:06:14.761 02:27:45 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.761 02:27:45 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:06:14.761 ************************************ 00:06:14.761 END TEST cpu_locks 00:06:14.761 ************************************ 00:06:14.761 00:06:14.761 real 0m38.250s 00:06:14.761 user 1m13.414s 00:06:14.761 sys 0m8.354s 00:06:14.761 02:27:45 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.761 02:27:45 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.761 ************************************ 00:06:14.761 END TEST event 00:06:14.761 ************************************ 00:06:14.761 02:27:45 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:14.761 02:27:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.761 02:27:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.761 02:27:45 -- common/autotest_common.sh@10 -- # set +x 00:06:14.761 ************************************ 00:06:14.761 START TEST thread 00:06:14.761 ************************************ 00:06:14.761 02:27:45 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:14.761 * Looking for test storage... 
00:06:15.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:15.020 02:27:45 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:15.020 02:27:45 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:15.020 02:27:45 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:15.020 02:27:45 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:15.020 02:27:45 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.020 02:27:45 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.020 02:27:45 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.020 02:27:45 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.020 02:27:45 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.020 02:27:45 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.020 02:27:45 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.020 02:27:45 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.020 02:27:45 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.020 02:27:45 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.020 02:27:45 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.020 02:27:45 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:15.020 02:27:45 thread -- scripts/common.sh@345 -- # : 1 00:06:15.020 02:27:45 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.020 02:27:45 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:15.020 02:27:45 thread -- scripts/common.sh@365 -- # decimal 1 00:06:15.020 02:27:45 thread -- scripts/common.sh@353 -- # local d=1 00:06:15.020 02:27:45 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.020 02:27:45 thread -- scripts/common.sh@355 -- # echo 1 00:06:15.020 02:27:45 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.020 02:27:45 thread -- scripts/common.sh@366 -- # decimal 2 00:06:15.020 02:27:45 thread -- scripts/common.sh@353 -- # local d=2 00:06:15.020 02:27:45 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.020 02:27:45 thread -- scripts/common.sh@355 -- # echo 2 00:06:15.020 02:27:45 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.020 02:27:45 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.020 02:27:45 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.020 02:27:45 thread -- scripts/common.sh@368 -- # return 0 00:06:15.020 02:27:45 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.020 02:27:45 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:15.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.020 --rc genhtml_branch_coverage=1 00:06:15.020 --rc genhtml_function_coverage=1 00:06:15.020 --rc genhtml_legend=1 00:06:15.020 --rc geninfo_all_blocks=1 00:06:15.020 --rc geninfo_unexecuted_blocks=1 00:06:15.020 00:06:15.020 ' 00:06:15.020 02:27:45 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:15.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.020 --rc genhtml_branch_coverage=1 00:06:15.020 --rc genhtml_function_coverage=1 00:06:15.020 --rc genhtml_legend=1 00:06:15.020 --rc geninfo_all_blocks=1 00:06:15.020 --rc geninfo_unexecuted_blocks=1 00:06:15.020 00:06:15.020 ' 00:06:15.020 02:27:45 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:15.020 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.020 --rc genhtml_branch_coverage=1 00:06:15.020 --rc genhtml_function_coverage=1 00:06:15.020 --rc genhtml_legend=1 00:06:15.020 --rc geninfo_all_blocks=1 00:06:15.020 --rc geninfo_unexecuted_blocks=1 00:06:15.020 00:06:15.020 ' 00:06:15.020 02:27:45 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:15.020 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.020 --rc genhtml_branch_coverage=1 00:06:15.020 --rc genhtml_function_coverage=1 00:06:15.020 --rc genhtml_legend=1 00:06:15.020 --rc geninfo_all_blocks=1 00:06:15.020 --rc geninfo_unexecuted_blocks=1 00:06:15.020 00:06:15.020 ' 00:06:15.020 02:27:45 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:15.020 02:27:45 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:15.020 02:27:45 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.020 02:27:45 thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.020 ************************************ 00:06:15.020 START TEST thread_poller_perf 00:06:15.020 ************************************ 00:06:15.020 02:27:45 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:15.020 [2024-12-16 02:27:45.568379] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:15.020 [2024-12-16 02:27:45.568447] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786927 ] 00:06:15.020 [2024-12-16 02:27:45.644983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.020 [2024-12-16 02:27:45.667380] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.020 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:16.397 [2024-12-16T01:27:47.057Z] ====================================== 00:06:16.398 [2024-12-16T01:27:47.057Z] busy:2107709380 (cyc) 00:06:16.398 [2024-12-16T01:27:47.057Z] total_run_count: 423000 00:06:16.398 [2024-12-16T01:27:47.057Z] tsc_hz: 2100000000 (cyc) 00:06:16.398 [2024-12-16T01:27:47.057Z] ====================================== 00:06:16.398 [2024-12-16T01:27:47.057Z] poller_cost: 4982 (cyc), 2372 (nsec) 00:06:16.398 00:06:16.398 real 0m1.160s 00:06:16.398 user 0m1.075s 00:06:16.398 sys 0m0.080s 00:06:16.398 02:27:46 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.398 02:27:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:16.398 ************************************ 00:06:16.398 END TEST thread_poller_perf 00:06:16.398 ************************************ 00:06:16.398 02:27:46 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:16.398 02:27:46 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:16.398 02:27:46 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.398 02:27:46 thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.398 ************************************ 00:06:16.398 START TEST thread_poller_perf 00:06:16.398 
************************************ 00:06:16.398 02:27:46 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:16.398 [2024-12-16 02:27:46.799783] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:16.398 [2024-12-16 02:27:46.800007] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid787135 ] 00:06:16.398 [2024-12-16 02:27:46.877952] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.398 [2024-12-16 02:27:46.900627] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.398 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:17.334 [2024-12-16T01:27:47.993Z] ====================================== 00:06:17.334 [2024-12-16T01:27:47.993Z] busy:2101380214 (cyc) 00:06:17.334 [2024-12-16T01:27:47.994Z] total_run_count: 5138000 00:06:17.335 [2024-12-16T01:27:47.994Z] tsc_hz: 2100000000 (cyc) 00:06:17.335 [2024-12-16T01:27:47.994Z] ====================================== 00:06:17.335 [2024-12-16T01:27:47.994Z] poller_cost: 408 (cyc), 194 (nsec) 00:06:17.335 00:06:17.335 real 0m1.153s 00:06:17.335 user 0m1.081s 00:06:17.335 sys 0m0.067s 00:06:17.335 02:27:47 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.335 02:27:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:17.335 ************************************ 00:06:17.335 END TEST thread_poller_perf 00:06:17.335 ************************************ 00:06:17.335 02:27:47 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:17.335 00:06:17.335 real 0m2.634s 00:06:17.335 user 0m2.314s 00:06:17.335 sys 0m0.334s 00:06:17.335 02:27:47 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.335 02:27:47 thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.335 ************************************ 00:06:17.335 END TEST thread 00:06:17.335 ************************************ 00:06:17.594 02:27:48 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:17.594 02:27:48 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:17.594 02:27:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.594 02:27:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.594 02:27:48 -- common/autotest_common.sh@10 -- # set +x 00:06:17.594 ************************************ 00:06:17.594 START TEST app_cmdline 00:06:17.594 ************************************ 00:06:17.594 02:27:48 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:17.594 * Looking for test storage... 00:06:17.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:17.594 02:27:48 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:17.594 02:27:48 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:17.594 02:27:48 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:17.594 02:27:48 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:17.594 02:27:48 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.594 02:27:48 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.594 02:27:48 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.594 02:27:48 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.594 02:27:48 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.594 02:27:48 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.594 02:27:48 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:06:17.594 02:27:48 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.594 02:27:48 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.594 02:27:48 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.594 02:27:48 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.594 02:27:48 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:17.594 02:27:48 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:17.594 02:27:48 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.594 02:27:48 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:17.594 02:27:48 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:17.594 02:27:48 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:17.594 02:27:48 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.594 02:27:48 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:17.594 02:27:48 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.594 02:27:48 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:17.594 02:27:48 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:17.594 02:27:48 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.594 02:27:48 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:17.594 02:27:48 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.594 02:27:48 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.594 02:27:48 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.594 02:27:48 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:17.594 02:27:48 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.594 02:27:48 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:17.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.594 --rc genhtml_branch_coverage=1 
00:06:17.594 --rc genhtml_function_coverage=1 00:06:17.594 --rc genhtml_legend=1 00:06:17.594 --rc geninfo_all_blocks=1 00:06:17.594 --rc geninfo_unexecuted_blocks=1 00:06:17.594 00:06:17.594 ' 00:06:17.594 02:27:48 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:17.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.594 --rc genhtml_branch_coverage=1 00:06:17.594 --rc genhtml_function_coverage=1 00:06:17.594 --rc genhtml_legend=1 00:06:17.594 --rc geninfo_all_blocks=1 00:06:17.594 --rc geninfo_unexecuted_blocks=1 00:06:17.594 00:06:17.594 ' 00:06:17.594 02:27:48 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:17.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.594 --rc genhtml_branch_coverage=1 00:06:17.594 --rc genhtml_function_coverage=1 00:06:17.594 --rc genhtml_legend=1 00:06:17.594 --rc geninfo_all_blocks=1 00:06:17.595 --rc geninfo_unexecuted_blocks=1 00:06:17.595 00:06:17.595 ' 00:06:17.595 02:27:48 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:17.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.595 --rc genhtml_branch_coverage=1 00:06:17.595 --rc genhtml_function_coverage=1 00:06:17.595 --rc genhtml_legend=1 00:06:17.595 --rc geninfo_all_blocks=1 00:06:17.595 --rc geninfo_unexecuted_blocks=1 00:06:17.595 00:06:17.595 ' 00:06:17.595 02:27:48 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:17.595 02:27:48 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=787429 00:06:17.595 02:27:48 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 787429 00:06:17.595 02:27:48 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:17.595 02:27:48 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 787429 ']' 00:06:17.595 02:27:48 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:17.595 02:27:48 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.595 02:27:48 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.595 02:27:48 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.595 02:27:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:17.854 [2024-12-16 02:27:48.266803] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:17.854 [2024-12-16 02:27:48.266856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid787429 ] 00:06:17.854 [2024-12-16 02:27:48.341465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.854 [2024-12-16 02:27:48.364018] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.113 02:27:48 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.113 02:27:48 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:18.113 02:27:48 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:18.113 { 00:06:18.113 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:06:18.113 "fields": { 00:06:18.113 "major": 25, 00:06:18.113 "minor": 1, 00:06:18.113 "patch": 0, 00:06:18.113 "suffix": "-pre", 00:06:18.113 "commit": "e01cb43b8" 00:06:18.113 } 00:06:18.113 } 00:06:18.113 02:27:48 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:18.113 02:27:48 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:18.113 02:27:48 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:06:18.113 02:27:48 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:18.113 02:27:48 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:18.113 02:27:48 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:18.113 02:27:48 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:18.113 02:27:48 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:18.113 02:27:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:18.113 02:27:48 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:18.371 02:27:48 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:18.371 02:27:48 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:18.371 02:27:48 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:18.371 02:27:48 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:18.371 02:27:48 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:18.371 02:27:48 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:18.371 02:27:48 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.371 02:27:48 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:18.371 02:27:48 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:18.371 02:27:48 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:18.371 02:27:48 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:06:18.371 02:27:48 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:18.371 02:27:48 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:18.371 02:27:48 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:18.371 request: 00:06:18.371 { 00:06:18.371 "method": "env_dpdk_get_mem_stats", 00:06:18.371 "req_id": 1 00:06:18.371 } 00:06:18.371 Got JSON-RPC error response 00:06:18.371 response: 00:06:18.371 { 00:06:18.371 "code": -32601, 00:06:18.371 "message": "Method not found" 00:06:18.371 } 00:06:18.371 02:27:49 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:18.371 02:27:49 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:18.371 02:27:49 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:18.371 02:27:49 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:18.371 02:27:49 app_cmdline -- app/cmdline.sh@1 -- # killprocess 787429 00:06:18.371 02:27:49 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 787429 ']' 00:06:18.371 02:27:49 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 787429 00:06:18.371 02:27:49 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:18.630 02:27:49 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:18.630 02:27:49 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 787429 00:06:18.630 02:27:49 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:18.630 02:27:49 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:18.630 02:27:49 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 787429' 00:06:18.630 killing process with pid 787429 00:06:18.630 02:27:49 
app_cmdline -- common/autotest_common.sh@973 -- # kill 787429 00:06:18.630 02:27:49 app_cmdline -- common/autotest_common.sh@978 -- # wait 787429 00:06:18.889 00:06:18.889 real 0m1.329s 00:06:18.889 user 0m1.570s 00:06:18.889 sys 0m0.455s 00:06:18.889 02:27:49 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.889 02:27:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:18.889 ************************************ 00:06:18.889 END TEST app_cmdline 00:06:18.889 ************************************ 00:06:18.889 02:27:49 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:18.889 02:27:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.889 02:27:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.889 02:27:49 -- common/autotest_common.sh@10 -- # set +x 00:06:18.889 ************************************ 00:06:18.889 START TEST version 00:06:18.889 ************************************ 00:06:18.889 02:27:49 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:18.889 * Looking for test storage... 
00:06:18.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:18.889 02:27:49 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:18.889 02:27:49 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:18.889 02:27:49 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:19.149 02:27:49 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:19.149 02:27:49 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.149 02:27:49 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.149 02:27:49 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.149 02:27:49 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.149 02:27:49 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.149 02:27:49 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.149 02:27:49 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.149 02:27:49 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.149 02:27:49 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.149 02:27:49 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.149 02:27:49 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.149 02:27:49 version -- scripts/common.sh@344 -- # case "$op" in 00:06:19.149 02:27:49 version -- scripts/common.sh@345 -- # : 1 00:06:19.149 02:27:49 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.149 02:27:49 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.149 02:27:49 version -- scripts/common.sh@365 -- # decimal 1 00:06:19.149 02:27:49 version -- scripts/common.sh@353 -- # local d=1 00:06:19.149 02:27:49 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.149 02:27:49 version -- scripts/common.sh@355 -- # echo 1 00:06:19.149 02:27:49 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.149 02:27:49 version -- scripts/common.sh@366 -- # decimal 2 00:06:19.149 02:27:49 version -- scripts/common.sh@353 -- # local d=2 00:06:19.149 02:27:49 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.149 02:27:49 version -- scripts/common.sh@355 -- # echo 2 00:06:19.149 02:27:49 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.149 02:27:49 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.149 02:27:49 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.149 02:27:49 version -- scripts/common.sh@368 -- # return 0 00:06:19.149 02:27:49 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.149 02:27:49 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:19.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.149 --rc genhtml_branch_coverage=1 00:06:19.149 --rc genhtml_function_coverage=1 00:06:19.149 --rc genhtml_legend=1 00:06:19.149 --rc geninfo_all_blocks=1 00:06:19.149 --rc geninfo_unexecuted_blocks=1 00:06:19.149 00:06:19.149 ' 00:06:19.149 02:27:49 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:19.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.149 --rc genhtml_branch_coverage=1 00:06:19.149 --rc genhtml_function_coverage=1 00:06:19.149 --rc genhtml_legend=1 00:06:19.149 --rc geninfo_all_blocks=1 00:06:19.149 --rc geninfo_unexecuted_blocks=1 00:06:19.149 00:06:19.149 ' 00:06:19.149 02:27:49 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:19.149 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.149 --rc genhtml_branch_coverage=1 00:06:19.149 --rc genhtml_function_coverage=1 00:06:19.149 --rc genhtml_legend=1 00:06:19.149 --rc geninfo_all_blocks=1 00:06:19.149 --rc geninfo_unexecuted_blocks=1 00:06:19.149 00:06:19.149 ' 00:06:19.149 02:27:49 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:19.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.149 --rc genhtml_branch_coverage=1 00:06:19.149 --rc genhtml_function_coverage=1 00:06:19.149 --rc genhtml_legend=1 00:06:19.149 --rc geninfo_all_blocks=1 00:06:19.149 --rc geninfo_unexecuted_blocks=1 00:06:19.149 00:06:19.149 ' 00:06:19.149 02:27:49 version -- app/version.sh@17 -- # get_header_version major 00:06:19.149 02:27:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:19.149 02:27:49 version -- app/version.sh@14 -- # cut -f2 00:06:19.149 02:27:49 version -- app/version.sh@14 -- # tr -d '"' 00:06:19.149 02:27:49 version -- app/version.sh@17 -- # major=25 00:06:19.149 02:27:49 version -- app/version.sh@18 -- # get_header_version minor 00:06:19.149 02:27:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:19.149 02:27:49 version -- app/version.sh@14 -- # cut -f2 00:06:19.149 02:27:49 version -- app/version.sh@14 -- # tr -d '"' 00:06:19.149 02:27:49 version -- app/version.sh@18 -- # minor=1 00:06:19.149 02:27:49 version -- app/version.sh@19 -- # get_header_version patch 00:06:19.149 02:27:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:19.149 02:27:49 version -- app/version.sh@14 -- # cut -f2 00:06:19.149 02:27:49 version -- app/version.sh@14 -- # tr -d '"' 00:06:19.149 
02:27:49 version -- app/version.sh@19 -- # patch=0 00:06:19.149 02:27:49 version -- app/version.sh@20 -- # get_header_version suffix 00:06:19.149 02:27:49 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:19.149 02:27:49 version -- app/version.sh@14 -- # cut -f2 00:06:19.149 02:27:49 version -- app/version.sh@14 -- # tr -d '"' 00:06:19.149 02:27:49 version -- app/version.sh@20 -- # suffix=-pre 00:06:19.149 02:27:49 version -- app/version.sh@22 -- # version=25.1 00:06:19.149 02:27:49 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:19.149 02:27:49 version -- app/version.sh@28 -- # version=25.1rc0 00:06:19.149 02:27:49 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:19.150 02:27:49 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:19.150 02:27:49 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:19.150 02:27:49 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:19.150 00:06:19.150 real 0m0.250s 00:06:19.150 user 0m0.165s 00:06:19.150 sys 0m0.128s 00:06:19.150 02:27:49 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:19.150 02:27:49 version -- common/autotest_common.sh@10 -- # set +x 00:06:19.150 ************************************ 00:06:19.150 END TEST version 00:06:19.150 ************************************ 00:06:19.150 02:27:49 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:19.150 02:27:49 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:19.150 02:27:49 -- spdk/autotest.sh@194 -- # uname -s 00:06:19.150 02:27:49 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:06:19.150 02:27:49 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:19.150 02:27:49 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:19.150 02:27:49 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:19.150 02:27:49 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:19.150 02:27:49 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:19.150 02:27:49 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:19.150 02:27:49 -- common/autotest_common.sh@10 -- # set +x 00:06:19.150 02:27:49 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:19.150 02:27:49 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:19.150 02:27:49 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:19.150 02:27:49 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:19.150 02:27:49 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:19.150 02:27:49 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:19.150 02:27:49 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:19.150 02:27:49 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:19.150 02:27:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.150 02:27:49 -- common/autotest_common.sh@10 -- # set +x 00:06:19.150 ************************************ 00:06:19.150 START TEST nvmf_tcp 00:06:19.150 ************************************ 00:06:19.150 02:27:49 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:19.408 * Looking for test storage... 
00:06:19.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:19.408 02:27:49 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:19.408 02:27:49 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:19.408 02:27:49 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:19.408 02:27:49 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:19.408 02:27:49 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.408 02:27:49 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.408 02:27:49 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.408 02:27:49 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.408 02:27:49 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.408 02:27:49 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.408 02:27:49 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.408 02:27:49 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.409 02:27:49 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.409 02:27:49 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.409 02:27:49 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.409 02:27:49 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:19.409 02:27:49 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:19.409 02:27:49 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.409 02:27:49 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.409 02:27:49 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:19.409 02:27:49 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:19.409 02:27:49 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.409 02:27:49 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:19.409 02:27:49 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.409 02:27:49 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:19.409 02:27:49 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:19.409 02:27:49 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.409 02:27:49 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:19.409 02:27:49 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.409 02:27:49 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.409 02:27:49 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.409 02:27:49 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:19.409 02:27:49 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.409 02:27:49 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:19.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.409 --rc genhtml_branch_coverage=1 00:06:19.409 --rc genhtml_function_coverage=1 00:06:19.409 --rc genhtml_legend=1 00:06:19.409 --rc geninfo_all_blocks=1 00:06:19.409 --rc geninfo_unexecuted_blocks=1 00:06:19.409 00:06:19.409 ' 00:06:19.409 02:27:49 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:19.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.409 --rc genhtml_branch_coverage=1 00:06:19.409 --rc genhtml_function_coverage=1 00:06:19.409 --rc genhtml_legend=1 00:06:19.409 --rc geninfo_all_blocks=1 00:06:19.409 --rc geninfo_unexecuted_blocks=1 00:06:19.409 00:06:19.409 ' 00:06:19.409 02:27:49 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:06:19.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.409 --rc genhtml_branch_coverage=1 00:06:19.409 --rc genhtml_function_coverage=1 00:06:19.409 --rc genhtml_legend=1 00:06:19.409 --rc geninfo_all_blocks=1 00:06:19.409 --rc geninfo_unexecuted_blocks=1 00:06:19.409 00:06:19.409 ' 00:06:19.409 02:27:49 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:19.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.409 --rc genhtml_branch_coverage=1 00:06:19.409 --rc genhtml_function_coverage=1 00:06:19.409 --rc genhtml_legend=1 00:06:19.409 --rc geninfo_all_blocks=1 00:06:19.409 --rc geninfo_unexecuted_blocks=1 00:06:19.409 00:06:19.409 ' 00:06:19.409 02:27:49 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:19.409 02:27:49 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:19.409 02:27:49 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:19.409 02:27:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:19.409 02:27:49 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.409 02:27:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:19.409 ************************************ 00:06:19.409 START TEST nvmf_target_core 00:06:19.409 ************************************ 00:06:19.409 02:27:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:19.668 * Looking for test storage... 
00:06:19.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:19.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.668 --rc genhtml_branch_coverage=1 00:06:19.668 --rc genhtml_function_coverage=1 00:06:19.668 --rc genhtml_legend=1 00:06:19.668 --rc geninfo_all_blocks=1 00:06:19.668 --rc geninfo_unexecuted_blocks=1 00:06:19.668 00:06:19.668 ' 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:19.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.668 --rc genhtml_branch_coverage=1 
00:06:19.668 --rc genhtml_function_coverage=1 00:06:19.668 --rc genhtml_legend=1 00:06:19.668 --rc geninfo_all_blocks=1 00:06:19.668 --rc geninfo_unexecuted_blocks=1 00:06:19.668 00:06:19.668 ' 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:19.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.668 --rc genhtml_branch_coverage=1 00:06:19.668 --rc genhtml_function_coverage=1 00:06:19.668 --rc genhtml_legend=1 00:06:19.668 --rc geninfo_all_blocks=1 00:06:19.668 --rc geninfo_unexecuted_blocks=1 00:06:19.668 00:06:19.668 ' 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:19.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.668 --rc genhtml_branch_coverage=1 00:06:19.668 --rc genhtml_function_coverage=1 00:06:19.668 --rc genhtml_legend=1 00:06:19.668 --rc geninfo_all_blocks=1 00:06:19.668 --rc geninfo_unexecuted_blocks=1 00:06:19.668 00:06:19.668 ' 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:19.668 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:19.668 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:19.669 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:19.669 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:19.669 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:19.669 02:27:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:19.669 02:27:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:19.669 02:27:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:19.669 02:27:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:19.669 ************************************ 00:06:19.669 START TEST nvmf_abort 00:06:19.669 ************************************ 00:06:19.669 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:19.928 * Looking for test storage... 
00:06:19.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.928 
02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:19.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.928 --rc genhtml_branch_coverage=1 00:06:19.928 --rc genhtml_function_coverage=1 00:06:19.928 --rc genhtml_legend=1 00:06:19.928 --rc geninfo_all_blocks=1 00:06:19.928 --rc 
geninfo_unexecuted_blocks=1 00:06:19.928 00:06:19.928 ' 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:19.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.928 --rc genhtml_branch_coverage=1 00:06:19.928 --rc genhtml_function_coverage=1 00:06:19.928 --rc genhtml_legend=1 00:06:19.928 --rc geninfo_all_blocks=1 00:06:19.928 --rc geninfo_unexecuted_blocks=1 00:06:19.928 00:06:19.928 ' 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:19.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.928 --rc genhtml_branch_coverage=1 00:06:19.928 --rc genhtml_function_coverage=1 00:06:19.928 --rc genhtml_legend=1 00:06:19.928 --rc geninfo_all_blocks=1 00:06:19.928 --rc geninfo_unexecuted_blocks=1 00:06:19.928 00:06:19.928 ' 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:19.928 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.928 --rc genhtml_branch_coverage=1 00:06:19.928 --rc genhtml_function_coverage=1 00:06:19.928 --rc genhtml_legend=1 00:06:19.928 --rc geninfo_all_blocks=1 00:06:19.928 --rc geninfo_unexecuted_blocks=1 00:06:19.928 00:06:19.928 ' 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.928 02:27:50 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:19.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:19.928 02:27:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:26.498 02:27:56 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:26.498 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:26.498 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:26.498 02:27:56 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:26.498 Found net devices under 0000:af:00.0: cvl_0_0 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:af:00.1: cvl_0_1' 00:06:26.498 Found net devices under 0000:af:00.1: cvl_0_1 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:26.498 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:26.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:26.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:06:26.499 00:06:26.499 --- 10.0.0.2 ping statistics --- 00:06:26.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:26.499 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:26.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:26.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:06:26.499 00:06:26.499 --- 10.0.0.1 ping statistics --- 00:06:26.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:26.499 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=791042 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 791042 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 791042 ']' 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.499 [2024-12-16 02:27:56.486439] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:26.499 [2024-12-16 02:27:56.486484] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:26.499 [2024-12-16 02:27:56.566446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:26.499 [2024-12-16 02:27:56.589693] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:26.499 [2024-12-16 02:27:56.589732] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:26.499 [2024-12-16 02:27:56.589739] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:26.499 [2024-12-16 02:27:56.589744] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:26.499 [2024-12-16 02:27:56.589750] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:26.499 [2024-12-16 02:27:56.591095] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.499 [2024-12-16 02:27:56.591204] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.499 [2024-12-16 02:27:56.591206] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.499 [2024-12-16 02:27:56.734736] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.499 Malloc0 00:06:26.499 02:27:56 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.499 Delay0 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.499 [2024-12-16 02:27:56.810472] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.499 02:27:56 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:26.499 [2024-12-16 02:27:56.986028] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:28.401 Initializing NVMe Controllers 00:06:28.401 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:28.401 controller IO queue size 128 less than required 00:06:28.401 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:28.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:28.401 Initialization complete. Launching workers. 
00:06:28.401 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 37065 00:06:28.401 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37130, failed to submit 62 00:06:28.401 success 37069, unsuccessful 61, failed 0 00:06:28.401 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:28.401 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:28.401 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:28.401 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:28.401 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:28.401 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:28.401 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:28.401 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:28.401 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:28.401 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:28.401 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:28.401 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:28.401 rmmod nvme_tcp 00:06:28.401 rmmod nvme_fabrics 00:06:28.659 rmmod nvme_keyring 00:06:28.659 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:28.659 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:28.659 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:28.659 02:27:59 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 791042 ']' 00:06:28.659 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 791042 00:06:28.659 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 791042 ']' 00:06:28.659 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 791042 00:06:28.659 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:28.659 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:28.659 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 791042 00:06:28.659 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:28.659 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:28.659 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 791042' 00:06:28.659 killing process with pid 791042 00:06:28.659 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 791042 00:06:28.659 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 791042 00:06:28.659 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:28.918 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:28.918 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:28.918 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:28.918 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:28.918 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:06:28.918 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:28.918 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:28.918 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:28.918 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:28.918 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:28.918 02:27:59 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:30.825 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:30.825 00:06:30.825 real 0m11.119s 00:06:30.825 user 0m11.528s 00:06:30.825 sys 0m5.452s 00:06:30.825 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.825 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:30.825 ************************************ 00:06:30.825 END TEST nvmf_abort 00:06:30.825 ************************************ 00:06:30.825 02:28:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:30.825 02:28:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:30.825 02:28:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.825 02:28:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:30.825 ************************************ 00:06:30.825 START TEST nvmf_ns_hotplug_stress 00:06:30.825 ************************************ 00:06:30.825 02:28:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:31.085 * Looking for test storage... 00:06:31.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.085 
02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.085 02:28:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:31.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.085 --rc genhtml_branch_coverage=1 00:06:31.085 --rc genhtml_function_coverage=1 00:06:31.085 --rc genhtml_legend=1 00:06:31.085 --rc geninfo_all_blocks=1 00:06:31.085 --rc geninfo_unexecuted_blocks=1 00:06:31.085 00:06:31.085 ' 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:31.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.085 --rc genhtml_branch_coverage=1 00:06:31.085 --rc genhtml_function_coverage=1 00:06:31.085 --rc genhtml_legend=1 00:06:31.085 --rc geninfo_all_blocks=1 00:06:31.085 --rc geninfo_unexecuted_blocks=1 00:06:31.085 00:06:31.085 ' 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:31.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.085 --rc genhtml_branch_coverage=1 00:06:31.085 --rc genhtml_function_coverage=1 00:06:31.085 --rc genhtml_legend=1 00:06:31.085 --rc geninfo_all_blocks=1 00:06:31.085 --rc geninfo_unexecuted_blocks=1 00:06:31.085 00:06:31.085 ' 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:31.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.085 --rc genhtml_branch_coverage=1 00:06:31.085 --rc genhtml_function_coverage=1 00:06:31.085 --rc genhtml_legend=1 00:06:31.085 --rc geninfo_all_blocks=1 00:06:31.085 --rc geninfo_unexecuted_blocks=1 00:06:31.085 
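The `cmp_versions` trace above (scripts/common.sh@333-368) splits each version string on `.` and `-` and compares it component by component to decide `lt 1.15 2`. A minimal standalone sketch of that logic — function name and structure are illustrative, not the exact SPDK source:

```shell
# Sketch of the version comparison traced in scripts/common.sh: split on
# dots/dashes, pad the shorter version with zeros, first differing
# component decides. Returns 0 (true) when $1 < $2.
lt_version() {
    local IFS=.-
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        if (( a < b )); then return 0; fi        # first difference decides
        if (( a > b )); then return 1; fi
    done
    return 1                                     # equal versions are not "less than"
}

lt_version 1.15 2 && echo "1.15 < 2"             # the comparison made in the log
lt_version 2.39.2 2.40 && echo "2.39.2 < 2.40"
```

This is why the log takes the `lt 1.15 2` branch and enables the `--rc lcov_branch_coverage=1 ...` option set for the newer lcov.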
00:06:31.085 ' 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:31.085 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:31.086 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
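The warning `line 33: [: : integer expression expected` above comes from `'[' '' -eq 1 ']'`: an empty variable reaches a numeric `-eq` test, which errors on stderr and falls through to the false branch. A minimal reproduction with the usual defensive fix (variable name here is illustrative, not taken from nvmf/common.sh):

```shell
# Reproduce the "[: : integer expression expected" pattern: a numeric test
# against an empty variable errors but still behaves as false; substituting
# a default with ${var:-0} keeps the test well-formed.
flag=""

# Broken form: prints an error to stderr (silenced here), takes the else branch.
if [ "$flag" -eq 1 ] 2>/dev/null; then echo "broken: yes"; else echo "broken: no"; fi

# Defensive form: empty/unset expands to 0, so the test is always valid.
if [ "${flag:-0}" -eq 1 ]; then echo "fixed: yes"; else echo "fixed: no"; fi

flag=1
if [ "${flag:-0}" -eq 1 ]; then echo "fixed: yes"; fi
```

The harness keeps running because `if` only cares about the (false) exit status, which is why the log continues past the warning.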
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:31.086 02:28:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:37.659 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:37.659 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:37.659 02:28:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:37.659 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:37.659 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:37.659 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:37.659 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:37.659 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:37.659 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:37.659 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:37.659 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:06:37.659 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:37.659 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:37.659 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:37.659 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:37.659 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:37.659 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:37.659 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:37.659 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:37.659 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:37.659 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:37.660 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:37.660 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:37.660 02:28:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:37.660 Found net devices under 0000:af:00.0: cvl_0_0 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:37.660 02:28:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:37.660 Found net devices under 0000:af:00.1: cvl_0_1 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
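The device scan above buckets NICs into `e810`/`x722`/`mlx` families by PCI vendor:device ID before picking target/initiator interfaces. A sketch of that classification as a lookup table — it lists only IDs visible in this log plus a couple from the traced `mlx+=` lines, not the full table in nvmf/common.sh:

```shell
# Illustrative PCI vendor:device -> NIC family table, mirroring the
# e810/x722/mlx bucketing in the scan (0x8086 = Intel, 0x15b3 = Mellanox).
declare -A nic_family=(
    [0x8086:0x1592]=e810
    [0x8086:0x159b]=e810    # the two ports found at 0000:af:00.0 / 0000:af:00.1
    [0x8086:0x37d2]=x722
    [0x15b3:0x1017]=mlx
    [0x15b3:0x1019]=mlx
)

classify() {
    # Print the family for vendor $1, device $2; "unknown" if unlisted.
    echo "${nic_family[$1:$2]:-unknown}"
}

classify 0x8086 0x159b    # matches "Found 0000:af:00.0 (0x8086 - 0x159b)" above
classify 0x15b3 0x1017
```

With both discovered ports in the `e810` bucket, the script later assigns `cvl_0_0` as the target interface and `cvl_0_1` as the initiator.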
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:37.660 02:28:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:37.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:37.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:06:37.660 00:06:37.660 --- 10.0.0.2 ping statistics --- 00:06:37.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.660 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:37.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:37.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:06:37.660 00:06:37.660 --- 10.0.0.1 ping statistics --- 00:06:37.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:37.660 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
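The `ipts` call at nvmf/common.sh@287, expanded at @790, shows the pattern of tagging every inserted firewall rule with an `SPDK_NVMF:<args>` comment so that cleanup (the `iptables-restore` seen earlier in the log) can identify exactly the rules this run added. A reconstruction of the wrapper, with `iptables` stubbed out so the sketch runs without root:

```shell
# Stub iptables so this runs unprivileged: print the rule instead of
# mutating the firewall. The ipts wrapper body below is the pattern from
# the log; the stub is ours.
iptables() { echo "iptables $*"; }

ipts() {
    # Append a comment recording the original arguments verbatim.
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

rule=$(ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT)
echo "$rule"
```

The printed rule matches the @790 trace above: the ACCEPT rule for NVMe/TCP port 4420 on the initiator interface, carrying its own arguments in the comment.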
tcp -o' 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=795085 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 795085 00:06:37.660 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:37.661 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 795085 ']' 00:06:37.661 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.661 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:37.661 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:37.661 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:37.661 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:37.661 [2024-12-16 02:28:07.755873] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:06:37.661 [2024-12-16 02:28:07.755921] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:06:37.661 [2024-12-16 02:28:07.833622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:37.661 [2024-12-16 02:28:07.855644] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:06:37.661 [2024-12-16 02:28:07.855683] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:06:37.661 [2024-12-16 02:28:07.855690] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:37.661 [2024-12-16 02:28:07.855696] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:37.661 [2024-12-16 02:28:07.855701] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:06:37.661 [2024-12-16 02:28:07.857027] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:06:37.661 [2024-12-16 02:28:07.857146] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:06:37.661 [2024-12-16 02:28:07.857147] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:06:37.661 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:37.661 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0
00:06:37.661 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:06:37.661 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable
00:06:37.661 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:37.661 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:06:37.661 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:06:37.661 02:28:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:06:37.661 [2024-12-16 02:28:08.149452] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:06:37.661 02:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:06:37.919 02:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:06:37.919 [2024-12-16 02:28:08.538827] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:06:37.919 02:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:06:38.177 02:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:06:38.435 Malloc0
00:06:38.435 02:28:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:06:38.693 Delay0
00:06:38.693 02:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:38.951 02:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:06:38.951 NULL1
00:06:38.951 02:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:06:39.209 02:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:06:39.209 02:28:09
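The traced RPCs up to this point set up the target under test. A minimal stand-alone sketch of that setup sequence, with a stubbed `rpc` function standing in for `scripts/rpc.py` (an assumption so the sketch runs without a live nvmf_tgt):

```shell
#!/usr/bin/env bash
# Sketch of the target setup visible in the trace above.
# "rpc" is a stub for $SPDK_DIR/scripts/rpc.py, which would talk
# to a running nvmf_tgt over its RPC socket.
rpc() { echo "rpc $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
ADDR=10.0.0.2

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a "$ADDR" -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a "$ADDR" -s 4420
rpc bdev_malloc_create 32 512 -b Malloc0       # 32 MiB bdev, 512 B blocks
rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns "$NQN" Delay0
rpc bdev_null_create NULL1 1000 512            # the bdev resized by the loop below
rpc nvmf_subsystem_add_ns "$NQN" NULL1
```

After this, the trace launches `spdk_nvme_perf` against the listener to generate I/O while namespaces are hot-plugged.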
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=795463 00:06:39.209 02:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:39.209 02:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.467 02:28:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.725 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:39.725 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:39.725 true 00:06:39.983 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:39.983 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.983 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.241 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:40.241 02:28:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:40.498 true 00:06:40.498 02:28:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:40.498 02:28:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.756 02:28:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.756 02:28:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:41.014 02:28:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:41.014 true 00:06:41.014 02:28:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:41.014 02:28:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.271 02:28:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.529 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:41.529 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:41.787 true 00:06:41.787 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:41.787 02:28:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.045 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.045 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:42.045 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:42.303 true 00:06:42.303 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:42.303 02:28:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.561 02:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.818 02:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:42.818 02:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:42.818 true 00:06:43.076 02:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:43.076 02:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.076 02:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.334 02:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:43.334 02:28:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:43.593 true 00:06:43.593 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:43.593 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.851 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.851 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:43.851 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:44.109 true 00:06:44.109 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:44.109 02:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.367 
02:28:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.625 02:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:44.625 02:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:44.625 true 00:06:44.883 02:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:44.883 02:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.883 02:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.141 02:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:45.141 02:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:45.399 true 00:06:45.399 02:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:45.399 02:28:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.657 02:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.915 02:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:45.915 02:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:45.915 true 00:06:45.915 02:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:45.915 02:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.173 02:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.432 02:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:46.432 02:28:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:46.432 true 00:06:46.689 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:46.689 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.689 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.947 
02:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:46.947 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:47.206 true 00:06:47.206 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:47.206 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.464 02:28:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.722 02:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:47.722 02:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:47.722 true 00:06:47.722 02:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:47.722 02:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.980 02:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.238 02:28:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:48.238 02:28:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:48.496 true 00:06:48.496 02:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:48.496 02:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.754 02:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.012 02:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:49.013 02:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:49.013 true 00:06:49.013 02:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:49.013 02:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.270 02:28:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.528 02:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:49.528 02:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:49.787 true 00:06:49.787 02:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:49.787 02:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.045 02:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.303 02:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:50.303 02:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:50.303 true 00:06:50.303 02:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:50.303 02:28:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.562 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.820 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:50.820 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:51.078 true 00:06:51.078 02:28:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:51.079 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.337 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.595 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:51.595 02:28:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:51.595 true 00:06:51.595 02:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:51.595 02:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.854 02:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.112 02:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:52.112 02:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:52.370 true 00:06:52.370 02:28:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:52.370 02:28:22 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.628 02:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:52.628 02:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:52.628 02:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:52.886 true 00:06:52.886 02:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:52.886 02:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.144 02:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.403 02:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:53.403 02:28:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:53.661 true 00:06:53.661 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:53.661 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.920 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.920 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:53.920 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:54.179 true 00:06:54.179 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:54.179 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.438 02:28:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.694 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:54.694 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:54.952 true 00:06:54.952 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:54.952 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.952 
02:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.210 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:55.210 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:55.468 true 00:06:55.468 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:55.468 02:28:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.727 02:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.984 02:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:55.984 02:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:55.984 true 00:06:55.984 02:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:55.984 02:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.242 02:28:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.500 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:56.500 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:56.758 true 00:06:56.758 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:56.758 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.015 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.015 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:57.015 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:57.274 true 00:06:57.274 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:57.274 02:28:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.532 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.791 
02:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:57.791 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:58.048 true 00:06:58.048 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:58.048 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.048 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.307 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:58.307 02:28:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:58.565 true 00:06:58.565 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:58.565 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.824 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.083 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:59.083 02:28:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:59.083 true 00:06:59.341 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:59.341 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.341 02:28:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.599 02:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:59.599 02:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:59.857 true 00:06:59.857 02:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:06:59.857 02:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.116 02:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.374 02:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:00.374 02:28:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:00.374 true 00:07:00.632 02:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:07:00.632 02:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.632 02:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.891 02:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:00.891 02:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:07:01.149 true 00:07:01.149 02:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:07:01.149 02:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.407 02:28:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.665 02:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:01.665 02:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:01.665 true 00:07:01.665 02:28:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:07:01.665 02:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.923 02:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.182 02:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:07:02.182 02:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:07:02.440 true 00:07:02.440 02:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:07:02.440 02:28:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.698 02:28:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.698 02:28:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:02.957 02:28:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:02.957 true 00:07:02.957 02:28:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:07:02.957 02:28:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.215 02:28:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.474 02:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:07:03.474 02:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:03.732 true 00:07:03.732 02:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:07:03.732 02:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.991 02:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.991 02:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:03.991 02:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:04.249 true 00:07:04.249 02:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:07:04.249 02:28:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.507 02:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.765 02:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:04.765 02:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:04.765 true 00:07:04.765 02:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:07:04.765 02:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.022 02:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.281 02:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:05.281 02:28:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:05.539 true 00:07:05.539 02:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:07:05.539 02:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.797 
02:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.797 02:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:05.797 02:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:06.055 true 00:07:06.055 02:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:07:06.055 02:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.314 02:28:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.572 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:06.572 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:06.572 true 00:07:06.831 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:07:06.831 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.831 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.089 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:07.089 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:07.347 true 00:07:07.347 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:07:07.347 02:28:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.605 02:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.863 02:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:07.863 02:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:07.863 true 00:07:07.863 02:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:07:07.863 02:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.121 02:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.379 
02:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:07:08.379 02:28:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:07:08.638 true 00:07:08.638 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:07:08.638 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.896 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.154 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:07:09.154 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:07:09.154 true 00:07:09.154 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463 00:07:09.154 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.413 02:28:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.413 Initializing NVMe Controllers 00:07:09.413 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 
00:07:09.413 Controller SPDK bdev Controller (SPDK00000000000001 ): Skipping inactive NS 1
00:07:09.413 Controller IO queue size 128, less than required.
00:07:09.413 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:09.413 WARNING: Some requested NVMe devices were skipped
00:07:09.413 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:09.413 Initialization complete. Launching workers.
00:07:09.413 ========================================================
00:07:09.413                                                       Latency(us)
00:07:09.413 Device Information                                   : IOPS     MiB/s   Average     min     max
00:07:09.413 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27387.97 13.37 4673.40 1557.42 8617.86
00:07:09.413 ========================================================
00:07:09.413 Total                                                : 27387.97 13.37 4673.40 1557.42 8617.86
00:07:09.413
00:07:09.673 02:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049
00:07:09.673 02:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049
00:07:09.931 true
00:07:09.931 02:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795463
00:07:09.931 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (795463) - No such process
00:07:09.931 02:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 795463
00:07:10.190 02:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:10.190 02:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # 
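The repeated @45/@46/@49/@50 entries above are one hotplug-stress iteration: remove namespace 1, re-add the Delay0 bdev as a namespace, bump `null_size`, and resize the NULL1 null bdev, looping while `kill -0` says the target process (795463 here) is still alive. A minimal dry-run sketch of that cycle, reconstructed from the log (this is not the actual ns_hotplug_stress.sh; `rpc` below only echoes the rpc.py calls it would issue, since no live nvmf target is assumed):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the add/remove/resize cycle seen in the log above.
# Assumption: `rpc` stands in for spdk/scripts/rpc.py and just echoes,
# because the kill -0 liveness check needs a running nvmf target process.
rpc() { echo "rpc.py $*"; }

nqn=nqn.2016-06.io.spdk:cnode1
null_size=1029                       # log shows sizes 1030..1049

for _ in 1 2 3; do                   # real script loops while `kill -0 $pid`
  rpc nvmf_subsystem_remove_ns "$nqn" 1         # sh@45
  rpc nvmf_subsystem_add_ns "$nqn" Delay0       # sh@46
  null_size=$((null_size + 1))                  # sh@49
  rpc bdev_null_resize NULL1 "$null_size"       # sh@50
done
echo "final null_size=$null_size"
```

Three iterations print resizes 1030 through 1032; the real run keeps going until the target exits, at which point `kill -0` fails with "No such process" and the script falls through to `wait` and namespace cleanup, as the entries above show.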
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:10.190 02:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:10.190 02:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:10.190 02:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:10.190 02:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:10.190 02:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:10.449 null0 00:07:10.449 02:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:10.449 02:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:10.449 02:28:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:10.707 null1 00:07:10.707 02:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:10.707 02:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:10.707 02:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:10.707 null2 00:07:10.707 02:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:10.707 02:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:10.965 02:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:10.965 null3 00:07:10.965 02:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:10.965 02:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:10.965 02:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:11.223 null4 00:07:11.223 02:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:11.223 02:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:11.223 02:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:11.481 null5 00:07:11.481 02:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:11.481 02:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:11.481 02:28:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:11.740 null6 00:07:11.740 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:11.740 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:11.740 02:28:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:11.740 null7 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:11.741 02:28:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 800983 800984 800987 800988 800990 800992 800994 800996 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.741 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:12.000 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.000 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:12.000 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:12.000 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:12.000 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:12.000 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:12.000 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:12.000 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:12.259 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.259 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.259 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:12.259 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.259 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.259 
02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.259 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:12.259 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.259 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:12.259 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.259 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.259 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:12.259 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.259 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.259 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:12.259 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.259 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.259 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:12.259 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.259 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.259 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:12.259 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.259 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.259 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:12.518 02:28:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:12.518 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:12.518 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:12.518 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:12.518 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:12.518 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.518 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:12.518 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:12.793 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.793 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.793 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:12.793 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.793 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.793 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:12.793 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.793 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.793 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:12.793 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.793 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.793 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:12.793 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.793 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.793 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:12.793 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.793 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.793 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:12.793 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.793 02:28:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.793 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:12.793 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:12.793 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:12.793 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:12.793 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.793 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:12.793 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:12.793 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:12.793 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:12.793 02:28:43 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:13.107 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:13.107 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:13.107 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.108 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.108 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:13.108 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.108 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.108 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:13.108 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.108 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.108 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:13.108 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.108 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.108 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:13.108 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.108 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.108 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:13.108 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.108 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.108 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.108 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.108 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:13.108 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:13.108 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.108 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.108 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:13.407 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:13.407 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:13.407 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.407 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:13.407 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:13.407 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:13.407 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:13.407 02:28:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:13.407 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.407 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.407 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.407 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.407 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:13.407 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:13.407 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.407 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.407 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.407 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:13.407 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.407 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:13.407 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.407 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.407 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:13.407 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.407 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.407 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:13.407 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.407 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.407 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:13.407 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.407 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.407 02:28:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:13.740 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.740 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:13.740 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:13.740 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:13.740 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:13.740 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:13.741 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:13.741 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:13.999 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.999 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.999 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:13.999 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:13.999 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:13.999 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:14.000 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.000 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.000 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.000 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:14.000 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.000 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:14.000 02:28:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.000 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.000 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:14.000 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.000 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.000 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:14.000 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.000 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.000 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:14.000 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.000 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.000 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:14.258 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:14.258 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:14.258 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:14.258 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.258 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:14.258 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:14.258 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:14.258 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:14.258 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.258 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.258 02:28:44 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:14.258 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.258 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.258 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:14.258 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.258 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.258 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:14.258 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.258 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.258 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:14.258 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.258 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.258 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
i < 10 )) 00:07:14.258 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.258 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:14.258 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:14.258 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.258 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.258 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:14.258 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.258 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.258 02:28:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:14.517 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:14.517 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 2 00:07:14.517 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:14.517 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:14.517 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:14.517 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:14.517 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:14.517 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.776 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.776 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.776 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:14.776 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:07:14.776 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.776 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:14.776 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.776 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.776 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:14.776 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.776 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.776 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:14.776 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.776 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.776 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:14.776 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.776 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.776 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:14.776 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.776 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.776 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:14.776 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:14.776 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:14.776 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:15.035 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:15.035 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:15.035 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:15.035 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.035 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:15.035 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:15.035 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:15.035 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:15.035 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.035 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.035 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:15.035 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.035 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.035 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:07:15.294 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.294 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.294 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:15.294 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.294 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.294 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:15.294 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.294 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.294 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:15.294 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.294 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.294 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:15.294 02:28:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.294 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.294 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:15.294 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.294 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.294 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:15.294 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:15.294 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.294 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:15.294 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:15.294 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:15.295 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:15.295 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:15.295 02:28:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:15.554 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.554 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.554 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:15.554 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.554 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.554 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:15.554 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.554 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:07:15.554 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:15.554 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.554 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.554 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:15.554 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.554 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.554 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:15.554 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.554 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.554 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:15.554 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.554 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.554 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:15.554 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:15.554 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:15.554 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:15.813 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:15.813 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:15.813 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:15.813 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:15.813 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.813 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:07:15.813 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:15.813 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:16.072 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.072 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.072 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.072 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.072 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.072 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.072 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.072 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.072 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.072 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.072 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.072 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.072 02:28:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.072 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.072 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:16.072 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:16.072 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:16.072 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:16.072 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:16.072 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:16.072 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:16.072 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:16.072 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:16.072 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:16.072 rmmod nvme_tcp 00:07:16.073 rmmod nvme_fabrics 00:07:16.073 rmmod nvme_keyring 00:07:16.073 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:16.073 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:16.073 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:16.073 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 795085 ']' 00:07:16.073 02:28:46 
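The churn of `nvmf_subsystem_add_ns`/`nvmf_subsystem_remove_ns` calls traced above comes from the loop at lines 16-18 of `target/ns_hotplug_stress.sh`. The sketch below is a hedged reconstruction of its intent, not the script itself: the real driver issues these RPCs concurrently (which is why the trace interleaves the `(( ++i ))` counter with individual adds), and the `rpc` stub here stands in for `scripts/rpc.py` so the sketch is self-contained.

```shell
rpc() { :; }   # stand-in for scripts/rpc.py talking to the running nvmf target

NQN=nqn.2016-06.io.spdk:cnode1

i=0
while (( i < 10 )); do
    # Attach namespaces 1..8 (each backed by a null bdev: nsid 1 -> null0,
    # nsid 8 -> null7, matching the trace) in shuffled order, then detach
    # them again, also shuffled. The add/remove races under connected
    # hosts are what make this a hot-plug *stress* test.
    for nsid in $(shuf -e 1 2 3 4 5 6 7 8); do
        rpc nvmf_subsystem_add_ns -n "$nsid" "$NQN" "null$((nsid - 1))"
    done
    for nsid in $(shuf -e 1 2 3 4 5 6 7 8); do
        rpc nvmf_subsystem_remove_ns "$NQN" "$nsid"
    done
    (( ++i ))
done
```

The trailing run of bare `(( ++i ))`/`(( i < 10 ))` lines in the trace is the counter exhausting its remaining iterations once the add/remove work has drained.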
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 795085 00:07:16.073 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 795085 ']' 00:07:16.073 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 795085 00:07:16.073 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:16.073 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.073 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 795085 00:07:16.073 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:16.073 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:16.073 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 795085' 00:07:16.073 killing process with pid 795085 00:07:16.073 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 795085 00:07:16.073 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 795085 00:07:16.332 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:16.332 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:16.332 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:16.332 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:16.332 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:07:16.332 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:16.332 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:16.332 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:16.332 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:16.332 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:16.332 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:16.332 02:28:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.239 02:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:18.499 00:07:18.499 real 0m47.434s 00:07:18.499 user 3m22.127s 00:07:18.499 sys 0m16.813s 00:07:18.499 02:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.499 02:28:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:18.499 ************************************ 00:07:18.499 END TEST nvmf_ns_hotplug_stress 00:07:18.499 ************************************ 00:07:18.499 02:28:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:18.499 02:28:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:18.499 02:28:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.499 02:28:48 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:07:18.499 ************************************ 00:07:18.499 START TEST nvmf_delete_subsystem 00:07:18.499 ************************************ 00:07:18.499 02:28:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:18.499 * Looking for test storage... 00:07:18.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:18.499 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:18.499 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:07:18.499 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:18.499 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:18.499 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.499 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.499 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.499 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.499 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.499 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.499 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.499 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.499 02:28:49 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.499 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.499 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.499 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:18.499 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:18.499 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.499 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:18.499 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:18.499 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:18.499 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.499 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:18.499 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.499 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:18.499 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:18.499 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.758 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:18.758 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.758 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.758 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.758 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:18.758 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.758 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:18.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.758 --rc genhtml_branch_coverage=1 00:07:18.758 --rc genhtml_function_coverage=1 00:07:18.758 --rc genhtml_legend=1 00:07:18.758 --rc geninfo_all_blocks=1 00:07:18.758 --rc geninfo_unexecuted_blocks=1 00:07:18.758 00:07:18.758 ' 00:07:18.758 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:18.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.758 --rc genhtml_branch_coverage=1 00:07:18.758 --rc genhtml_function_coverage=1 00:07:18.758 --rc genhtml_legend=1 00:07:18.758 --rc geninfo_all_blocks=1 00:07:18.758 --rc geninfo_unexecuted_blocks=1 00:07:18.758 00:07:18.758 ' 00:07:18.758 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:18.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.758 --rc genhtml_branch_coverage=1 00:07:18.758 --rc genhtml_function_coverage=1 00:07:18.758 --rc genhtml_legend=1 00:07:18.758 --rc geninfo_all_blocks=1 00:07:18.758 --rc geninfo_unexecuted_blocks=1 00:07:18.758 00:07:18.758 ' 00:07:18.758 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:18.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.758 --rc 
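The `lt 1.15 2` trace above is `scripts/common.sh` comparing the installed lcov version against 2 to pick coverage flags. A minimal reconstruction of that dotted-version comparison, following the steps visible in the trace (split both versions on `.`/`-` into arrays, compare component-wise, fall through to the equality case); the real `cmp_versions` also validates each component as a decimal, which this sketch omits:

```shell
# Compare two dotted version strings under an operator: < > <= >= ==
cmp_versions() {
    local -a ver1 ver2
    local op=$2 v a b
    IFS=.- read -ra ver1 <<< "$1"
    IFS=.- read -ra ver2 <<< "$3"
    # Walk the longer of the two arrays, padding missing parts with 0.
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && { [[ $op == '>' || $op == '>=' ]]; return; }
        (( a < b )) && { [[ $op == '<' || $op == '<=' ]]; return; }
    done
    # All components equal.
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]
}

lt() { cmp_versions "$1" '<' "$2"; }   # as invoked in the trace: lt 1.15 2
```

With this shape, `lt 1.15 2` succeeds (1 < 2 at the first component), so the test script selects the flag set for pre-2.0 lcov.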
genhtml_branch_coverage=1 00:07:18.758 --rc genhtml_function_coverage=1 00:07:18.758 --rc genhtml_legend=1 00:07:18.758 --rc geninfo_all_blocks=1 00:07:18.758 --rc geninfo_unexecuted_blocks=1 00:07:18.758 00:07:18.758 ' 00:07:18.758 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:18.758 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:18.758 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:18.758 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:18.758 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:18.758 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:18.758 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:18.758 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.759 02:28:49 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:18.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:18.759 02:28:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:25.331 02:28:54 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:25.331 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:25.331 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:25.331 Found net devices under 0000:af:00.0: cvl_0_0 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:25.331 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.332 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:af:00.1: cvl_0_1' 00:07:25.332 Found net devices under 0000:af:00.1: cvl_0_1 00:07:25.332 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.332 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:25.332 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:25.332 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:25.332 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:25.332 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:25.332 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:25.332 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:25.332 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:25.332 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:25.332 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:25.332 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:25.332 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:25.332 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:25.332 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:25.332 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:25.332 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:25.332 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:25.332 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:25.332 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:25.332 02:28:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:25.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:25.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:07:25.332 00:07:25.332 --- 10.0.0.2 ping statistics --- 00:07:25.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.332 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:25.332 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:25.332 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:07:25.332 00:07:25.332 --- 10.0.0.1 ping statistics --- 00:07:25.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.332 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:25.332 02:28:55 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=805341 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 805341 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 805341 ']' 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:25.332 [2024-12-16 02:28:55.325551] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:07:25.332 [2024-12-16 02:28:55.325600] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.332 [2024-12-16 02:28:55.404498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:25.332 [2024-12-16 02:28:55.426973] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:25.332 [2024-12-16 02:28:55.427008] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:25.332 [2024-12-16 02:28:55.427016] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:25.332 [2024-12-16 02:28:55.427024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:25.332 [2024-12-16 02:28:55.427033] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:25.332 [2024-12-16 02:28:55.428150] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.332 [2024-12-16 02:28:55.428150] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:25.332 [2024-12-16 02:28:55.571909] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:25.332 [2024-12-16 02:28:55.592117] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:25.332 NULL1 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:25.332 Delay0 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.332 02:28:55 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:25.332 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:25.333 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=805541 00:07:25.333 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:25.333 02:28:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:25.333 [2024-12-16 02:28:55.703913] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:07:27.235 02:28:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:27.235 02:28:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:27.235 02:28:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:27.235 Read completed with error (sct=0, sc=8) 00:07:27.235 Read completed with error (sct=0, sc=8) 00:07:27.235 starting I/O failed: -6 00:07:27.235 Read completed with error (sct=0, sc=8) 00:07:27.235 Read completed with error (sct=0, sc=8) 00:07:27.235 Read completed with error (sct=0, sc=8) 00:07:27.235 Read completed with error (sct=0, sc=8) 00:07:27.235 starting I/O failed: -6 00:07:27.235 Read completed with error (sct=0, sc=8) 00:07:27.235 Read completed with error (sct=0, sc=8) 00:07:27.235 Write completed with error (sct=0, sc=8) 00:07:27.236 Read completed with error (sct=0, sc=8) 00:07:27.236 starting I/O failed: -6 00:07:27.236 Read completed with error (sct=0, sc=8) 00:07:27.236 Read completed with error (sct=0, sc=8) 00:07:27.236 Read completed with error (sct=0, sc=8) 00:07:27.236 Read completed with error (sct=0, sc=8) 00:07:27.236 starting I/O failed: -6 00:07:27.236 Read completed with error (sct=0, sc=8) 00:07:27.236 Write completed with error (sct=0, sc=8) 00:07:27.236 Read completed with error (sct=0, sc=8) 00:07:27.236 Read completed with error (sct=0, sc=8) 00:07:27.236 starting I/O failed: -6 00:07:27.236 Read completed with error (sct=0, sc=8) 00:07:27.236 Read completed with error (sct=0, sc=8) 00:07:27.236 Read completed with error (sct=0, sc=8) 00:07:27.236 Write completed with error (sct=0, sc=8) 00:07:27.236 starting I/O failed: -6 00:07:27.236 Read completed with error (sct=0, sc=8) 00:07:27.236 Read completed with error (sct=0, sc=8) 00:07:27.236 Read completed with error (sct=0, sc=8) 00:07:27.236 Read completed with error (sct=0, 
sc=8) 00:07:27.236 starting I/O failed: -6
00:07:27.236 [log condensed: several hundred repeats of "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" between 00:07:27.236 and 00:07:28.614 omitted; the distinct error records follow]
00:07:27.236 [2024-12-16 02:28:57.874450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c077c0 is same with the state(6) to be set
00:07:27.236 [2024-12-16 02:28:57.875011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c07400 is same with the state(6) to be set
00:07:28.613 [2024-12-16 02:28:58.841623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c05190 is same with the state(6) to be set
00:07:28.614 [2024-12-16 02:28:58.875388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f81f000d060 is same with the state(6) to be set
00:07:28.614 [2024-12-16 02:28:58.875540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f81f000d800 is same with the state(6) to be set
00:07:28.614 [2024-12-16 02:28:58.878178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c06f70 is same with the state(6) to be set
00:07:28.614 [2024-12-16 02:28:58.878800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c075e0 is same with the state(6) to be set
00:07:28.614 Initializing NVMe Controllers
00:07:28.614 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:28.614 Controller IO queue size 128, less than required.
00:07:28.614 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:28.614 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:28.614 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:28.614 Initialization complete. Launching workers.
00:07:28.614 ========================================================
00:07:28.614 Latency(us)
00:07:28.614 Device Information : IOPS MiB/s Average min max
00:07:28.614 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 172.73 0.08 890246.82 576.45 1011679.02
00:07:28.614 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 172.73 0.08 938579.96 353.73 1011485.25
00:07:28.614 ========================================================
00:07:28.614 Total : 345.46 0.17 914413.39 353.73 1011679.02
00:07:28.614 [2024-12-16 02:28:58.879153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c05190 (9): Bad file descriptor
00:07:28.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:28.614 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:28.614 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@34 -- # delay=0 00:07:28.614 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 805541 00:07:28.614 02:28:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:28.873 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:28.873 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 805541 00:07:28.873 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (805541) - No such process 00:07:28.873 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 805541 00:07:28.873 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:28.873 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 805541 00:07:28.873 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:28.873 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.873 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:28.873 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.873 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 805541 00:07:28.873 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:28.873 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:28.873 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:28.873 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:28.873 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:28.873 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.873 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.873 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.873 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:28.873 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.873 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.873 [2024-12-16 02:28:59.406000] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:28.873 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.873 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:28.873 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.873 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:28.873 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.873 02:28:59 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=806082 00:07:28.873 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:28.873 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:28.873 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806082 00:07:28.873 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:28.873 [2024-12-16 02:28:59.497094] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
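The perf launch just above backgrounds spdk_nvme_perf, records its PID in `perf_pid`, and then spins on `kill -0` with a 0.5 s sleep until the process exits or the retry budget runs out (delete_subsystem.sh lines 56-60). A standalone sketch of that poll-until-exit pattern, with a background `sleep` standing in for the spdk_nvme_perf process:

```shell
#!/usr/bin/env bash
# Poll-until-exit pattern from delete_subsystem.sh: `kill -0` sends no
# signal, it only probes whether the PID still exists, so the loop
# spins until the process exits on its own or the retry budget is spent.
sleep 1 &                        # stand-in for the spdk_nvme_perf process
perf_pid=$!

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do
    if (( delay++ > 20 )); then  # give up after ~10 s (20 * 0.5 s)
        echo "timed out waiting for $perf_pid" >&2
        exit 1
    fi
    sleep 0.5
done
echo "pid $perf_pid is gone"
```

Because `kill -0` only probes, the loop supplies its own timeout; the script treats exhausting the budget (`delay++ > 20`) as a test failure.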
00:07:29.438 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:29.438 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806082 00:07:29.438 02:28:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:30.004 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:30.004 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806082 00:07:30.004 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:30.570 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:30.570 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806082 00:07:30.570 02:29:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:30.829 02:29:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:30.829 02:29:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806082 00:07:30.829 02:29:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:31.395 02:29:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:31.395 02:29:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806082 00:07:31.395 02:29:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:31.961 02:29:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:31.961 02:29:02 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806082
00:07:31.961 02:29:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:32.219 Initializing NVMe Controllers
00:07:32.219 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:32.219 Controller IO queue size 128, less than required.
00:07:32.219 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:32.219 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:32.219 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:32.219 Initialization complete. Launching workers.
00:07:32.219 ========================================================
00:07:32.219 Latency(us)
00:07:32.219 Device Information : IOPS MiB/s Average min max
00:07:32.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002002.25 1000147.00 1005643.66
00:07:32.220 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003736.05 1000134.53 1009501.98
00:07:32.220 ========================================================
00:07:32.220 Total : 256.00 0.12 1002869.15 1000134.53 1009501.98
00:07:32.478 02:29:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:32.478 02:29:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806082
00:07:32.478 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (806082) - No such process
00:07:32.478 02:29:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 806082
00:07:32.478 02:29:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - 
SIGINT SIGTERM EXIT 00:07:32.478 02:29:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:32.478 02:29:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:32.478 02:29:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:32.478 02:29:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:32.478 02:29:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:32.478 02:29:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:32.478 02:29:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:32.478 rmmod nvme_tcp 00:07:32.478 rmmod nvme_fabrics 00:07:32.478 rmmod nvme_keyring 00:07:32.478 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:32.478 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:32.478 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:32.478 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 805341 ']' 00:07:32.478 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 805341 00:07:32.478 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 805341 ']' 00:07:32.478 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 805341 00:07:32.478 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:32.478 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.478 02:29:03 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 805341 00:07:32.479 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:32.479 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:32.479 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 805341' 00:07:32.479 killing process with pid 805341 00:07:32.479 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 805341 00:07:32.479 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 805341 00:07:32.738 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:32.738 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:32.738 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:32.738 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:32.738 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:32.738 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:32.738 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:32.738 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:32.738 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:32.738 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:07:32.738 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:32.738 02:29:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.273 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:35.273 00:07:35.273 real 0m16.331s 00:07:35.273 user 0m29.377s 00:07:35.273 sys 0m5.527s 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.274 ************************************ 00:07:35.274 END TEST nvmf_delete_subsystem 00:07:35.274 ************************************ 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:35.274 ************************************ 00:07:35.274 START TEST nvmf_host_management 00:07:35.274 ************************************ 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:35.274 * Looking for test storage... 
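Earlier in the delete_subsystem trace, the `NOT wait 805541` sequence asserted that `wait` on the already-dead perf PID fails (the `es=1` path in autotest_common.sh), since `wait` succeeds only for live children of the shell. A minimal standalone sketch of that assertion; the PID value is copied from the log and assumed not to be a child of the current shell:

```shell
#!/usr/bin/env bash
# `wait` only succeeds for children of this shell, so waiting on the
# already-reaped perf PID must fail -- which is what the NOT helper
# (expected exit status es=1) verifies.
pid=805541                      # PID from the log; no such child here
es=0
wait "$pid" 2>/dev/null || es=1
if (( es == 1 )); then
    echo "wait $pid failed as expected"
else
    echo "unexpected: wait $pid succeeded" >&2
    exit 1
fi
```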
00:07:35.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:35.274 02:29:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:35.274 02:29:05 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:35.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.274 --rc genhtml_branch_coverage=1 00:07:35.274 --rc genhtml_function_coverage=1 00:07:35.274 --rc genhtml_legend=1 00:07:35.274 --rc geninfo_all_blocks=1 00:07:35.274 --rc geninfo_unexecuted_blocks=1 00:07:35.274 00:07:35.274 ' 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:35.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.274 --rc genhtml_branch_coverage=1 00:07:35.274 --rc genhtml_function_coverage=1 00:07:35.274 --rc genhtml_legend=1 00:07:35.274 --rc geninfo_all_blocks=1 00:07:35.274 --rc geninfo_unexecuted_blocks=1 00:07:35.274 00:07:35.274 ' 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:35.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.274 --rc genhtml_branch_coverage=1 00:07:35.274 --rc genhtml_function_coverage=1 00:07:35.274 --rc genhtml_legend=1 00:07:35.274 --rc geninfo_all_blocks=1 00:07:35.274 --rc geninfo_unexecuted_blocks=1 00:07:35.274 00:07:35.274 ' 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:35.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:35.274 --rc genhtml_branch_coverage=1 00:07:35.274 --rc genhtml_function_coverage=1 00:07:35.274 --rc genhtml_legend=1 00:07:35.274 --rc geninfo_all_blocks=1 00:07:35.274 --rc geninfo_unexecuted_blocks=1 00:07:35.274 00:07:35.274 ' 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:35.274 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.275 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:35.275 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:35.275 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:35.275 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.275 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.275 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.275 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:35.275 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:35.275 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:35.275 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:35.275 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:35.275 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:07:35.275 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:35.275 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:35.275 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:35.275 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:35.275 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:35.275 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:35.275 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:35.275 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.275 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:35.275 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.275 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:35.275 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:35.275 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:35.275 02:29:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:41.846 02:29:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:41.846 02:29:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:41.846 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:41.846 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:41.846 02:29:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:41.846 Found net devices under 0000:af:00.0: cvl_0_0 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:41.846 Found net devices under 0000:af:00.1: cvl_0_1 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:41.846 02:29:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:41.846 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:41.847 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:41.847 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:07:41.847 00:07:41.847 --- 10.0.0.2 ping statistics --- 00:07:41.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.847 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:41.847 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:41.847 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:07:41.847 00:07:41.847 --- 10.0.0.1 ping statistics --- 00:07:41.847 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:41.847 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=810158 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 810158 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 810158 ']' 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.847 [2024-12-16 02:29:11.635775] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:41.847 [2024-12-16 02:29:11.635816] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:41.847 [2024-12-16 02:29:11.695621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:41.847 [2024-12-16 02:29:11.719281] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:41.847 [2024-12-16 02:29:11.719323] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:41.847 [2024-12-16 02:29:11.719330] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:41.847 [2024-12-16 02:29:11.719336] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:41.847 [2024-12-16 02:29:11.719341] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:41.847 [2024-12-16 02:29:11.720633] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.847 [2024-12-16 02:29:11.720741] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.847 [2024-12-16 02:29:11.720828] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.847 [2024-12-16 02:29:11.720829] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.847 [2024-12-16 02:29:11.849171] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:41.847 02:29:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.847 Malloc0 00:07:41.847 [2024-12-16 02:29:11.925633] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=810369 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 810369 /var/tmp/bdevperf.sock 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json 
/dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 810369 ']' 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:41.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:41.847 { 00:07:41.847 "params": { 00:07:41.847 "name": "Nvme$subsystem", 00:07:41.847 "trtype": "$TEST_TRANSPORT", 00:07:41.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:41.847 "adrfam": "ipv4", 00:07:41.847 "trsvcid": "$NVMF_PORT", 00:07:41.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:41.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:41.847 "hdgst": ${hdgst:-false}, 00:07:41.847 
"ddgst": ${ddgst:-false} 00:07:41.847 }, 00:07:41.847 "method": "bdev_nvme_attach_controller" 00:07:41.847 } 00:07:41.847 EOF 00:07:41.847 )") 00:07:41.847 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:41.848 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:41.848 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:41.848 02:29:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:41.848 "params": { 00:07:41.848 "name": "Nvme0", 00:07:41.848 "trtype": "tcp", 00:07:41.848 "traddr": "10.0.0.2", 00:07:41.848 "adrfam": "ipv4", 00:07:41.848 "trsvcid": "4420", 00:07:41.848 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:41.848 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:41.848 "hdgst": false, 00:07:41.848 "ddgst": false 00:07:41.848 }, 00:07:41.848 "method": "bdev_nvme_attach_controller" 00:07:41.848 }' 00:07:41.848 [2024-12-16 02:29:12.018320] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:41.848 [2024-12-16 02:29:12.018367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid810369 ] 00:07:41.848 [2024-12-16 02:29:12.092832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.848 [2024-12-16 02:29:12.115037] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.848 Running I/O for 10 seconds... 
00:07:41.848 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.848 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:41.848 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:41.848 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.848 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.848 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.848 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:41.848 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:41.848 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:41.848 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:41.848 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:41.848 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:41.848 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:41.848 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:41.848 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:07:41.848 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:41.848 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:41.848 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:41.848 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:41.848 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:07:41.848 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:07:41.848 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:42.106 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:42.106 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:42.106 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:42.106 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:42.106 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.106 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:42.106 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.367 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=705 00:07:42.367 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 705 -ge 100 ']' 00:07:42.367 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:42.367 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:42.367 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:42.367 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:42.367 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.367 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:42.367
[2024-12-16 02:29:12.788797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.367 [2024-12-16 02:29:12.788834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:42.367
[... 62 similar nvme_io_qpair_print_command / spdk_nvme_print_completion pairs: every queued command on qid 1 (WRITE cid 43-63, lba 103808-106368; READ cid 0-42, lba 98304-103680; all len:128) completed ABORTED - SQ DELETION (00/08) as the qpair was torn down ...]
[2024-12-16 02:29:12.789783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:42.368 [2024-12-16 02:29:12.789790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:42.368
[2024-12-16 02:29:12.789797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11e4f50 is same with the state(6) to be set 00:07:42.368 [2024-12-16 02:29:12.790744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:42.368 task offset: 103808 on job bdev=Nvme0n1 fails 00:07:42.368 00:07:42.368 Latency(us) 00:07:42.368 [2024-12-16T01:29:13.027Z] Device
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:42.368 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:42.368 Job: Nvme0n1 ended in about 0.39 seconds with error 00:07:42.368 Verification LBA range: start 0x0 length 0x400 00:07:42.368 Nvme0n1 : 0.39 1944.41 121.53 162.03 0.00 29561.10 1505.77 26838.55 00:07:42.368 [2024-12-16T01:29:13.027Z] =================================================================================================================== 00:07:42.368 [2024-12-16T01:29:13.027Z] Total : 1944.41 121.53 162.03 0.00 29561.10 1505.77 26838.55 00:07:42.368 [2024-12-16 02:29:12.793096] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:42.369 [2024-12-16 02:29:12.793123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d1490 (9): Bad file descriptor 00:07:42.369 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.369 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:42.369 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:42.369 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:42.369 [2024-12-16 02:29:12.798251] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:42.369 [2024-12-16 02:29:12.798331] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:42.369 [2024-12-16 02:29:12.798352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:42.369 [2024-12-16 02:29:12.798367] nvme_fabric.c: 
599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:42.369 [2024-12-16 02:29:12.798374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:42.369 [2024-12-16 02:29:12.798381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:07:42.369 [2024-12-16 02:29:12.798387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11d1490 00:07:42.369 [2024-12-16 02:29:12.798405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11d1490 (9): Bad file descriptor 00:07:42.369 [2024-12-16 02:29:12.798417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:07:42.369 [2024-12-16 02:29:12.798423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:07:42.369 [2024-12-16 02:29:12.798431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:07:42.369 [2024-12-16 02:29:12.798438] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:07:42.369 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:42.369 02:29:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:43.304 02:29:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 810369 00:07:43.304 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (810369) - No such process 00:07:43.304 02:29:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:43.304 02:29:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:43.304 02:29:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:43.304 02:29:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:43.304 02:29:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:43.304 02:29:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:43.304 02:29:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:43.304 02:29:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:43.304 { 00:07:43.304 "params": { 00:07:43.304 "name": "Nvme$subsystem", 00:07:43.304 "trtype": "$TEST_TRANSPORT", 00:07:43.304 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:43.304 "adrfam": "ipv4", 00:07:43.304 "trsvcid": "$NVMF_PORT", 00:07:43.304 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:43.304 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:07:43.304 "hdgst": ${hdgst:-false}, 00:07:43.304 "ddgst": ${ddgst:-false} 00:07:43.304 }, 00:07:43.304 "method": "bdev_nvme_attach_controller" 00:07:43.304 } 00:07:43.304 EOF 00:07:43.304 )") 00:07:43.304 02:29:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:43.304 02:29:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:43.304 02:29:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:43.304 02:29:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:43.304 "params": { 00:07:43.304 "name": "Nvme0", 00:07:43.304 "trtype": "tcp", 00:07:43.304 "traddr": "10.0.0.2", 00:07:43.304 "adrfam": "ipv4", 00:07:43.304 "trsvcid": "4420", 00:07:43.304 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:43.304 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:43.304 "hdgst": false, 00:07:43.304 "ddgst": false 00:07:43.304 }, 00:07:43.304 "method": "bdev_nvme_attach_controller" 00:07:43.304 }' 00:07:43.304 [2024-12-16 02:29:13.858127] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:43.304 [2024-12-16 02:29:13.858175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid810654 ] 00:07:43.304 [2024-12-16 02:29:13.931946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.304 [2024-12-16 02:29:13.952843] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.562 Running I/O for 1 seconds... 
00:07:44.498 1984.00 IOPS, 124.00 MiB/s 00:07:44.498 Latency(us) 00:07:44.498 [2024-12-16T01:29:15.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:44.498 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:44.498 Verification LBA range: start 0x0 length 0x400 00:07:44.498 Nvme0n1 : 1.01 2025.42 126.59 0.00 0.00 31105.37 6210.32 26588.89 00:07:44.498 [2024-12-16T01:29:15.157Z] =================================================================================================================== 00:07:44.498 [2024-12-16T01:29:15.157Z] Total : 2025.42 126.59 0.00 0.00 31105.37 6210.32 26588.89 00:07:44.757 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:44.757 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:44.757 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:44.757 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:44.757 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:44.757 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:44.757 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:44.757 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:44.757 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:44.757 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:44.757 02:29:15 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:44.757 rmmod nvme_tcp 00:07:44.757 rmmod nvme_fabrics 00:07:44.757 rmmod nvme_keyring 00:07:44.757 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:44.757 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:44.757 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:44.757 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 810158 ']' 00:07:44.757 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 810158 00:07:44.757 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 810158 ']' 00:07:44.757 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 810158 00:07:44.757 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:44.757 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:44.757 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 810158 00:07:44.757 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:44.757 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:44.757 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 810158' 00:07:44.757 killing process with pid 810158 00:07:44.757 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 810158 00:07:44.757 02:29:15 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 810158 00:07:45.016 [2024-12-16 02:29:15.543594] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:45.016 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:45.016 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:45.016 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:45.016 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:45.016 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:45.016 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:45.016 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:45.016 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:45.016 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:45.016 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.016 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:45.016 02:29:15 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.552 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:47.552 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:47.552 00:07:47.552 real 0m12.263s 00:07:47.552 user 0m19.178s 
00:07:47.552 sys 0m5.583s 00:07:47.552 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.552 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:47.552 ************************************ 00:07:47.552 END TEST nvmf_host_management 00:07:47.552 ************************************ 00:07:47.552 02:29:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:47.552 02:29:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:47.552 02:29:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.552 02:29:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:47.552 ************************************ 00:07:47.552 START TEST nvmf_lvol 00:07:47.552 ************************************ 00:07:47.552 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:47.552 * Looking for test storage... 
00:07:47.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:47.552 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:47.552 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:47.552 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:47.552 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:47.552 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:47.552 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:47.552 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:47.552 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:47.553 02:29:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:47.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.553 --rc genhtml_branch_coverage=1 00:07:47.553 --rc genhtml_function_coverage=1 00:07:47.553 --rc genhtml_legend=1 00:07:47.553 --rc geninfo_all_blocks=1 00:07:47.553 --rc geninfo_unexecuted_blocks=1 
00:07:47.553 00:07:47.553 ' 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:47.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.553 --rc genhtml_branch_coverage=1 00:07:47.553 --rc genhtml_function_coverage=1 00:07:47.553 --rc genhtml_legend=1 00:07:47.553 --rc geninfo_all_blocks=1 00:07:47.553 --rc geninfo_unexecuted_blocks=1 00:07:47.553 00:07:47.553 ' 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:47.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.553 --rc genhtml_branch_coverage=1 00:07:47.553 --rc genhtml_function_coverage=1 00:07:47.553 --rc genhtml_legend=1 00:07:47.553 --rc geninfo_all_blocks=1 00:07:47.553 --rc geninfo_unexecuted_blocks=1 00:07:47.553 00:07:47.553 ' 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:47.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:47.553 --rc genhtml_branch_coverage=1 00:07:47.553 --rc genhtml_function_coverage=1 00:07:47.553 --rc genhtml_legend=1 00:07:47.553 --rc geninfo_all_blocks=1 00:07:47.553 --rc geninfo_unexecuted_blocks=1 00:07:47.553 00:07:47.553 ' 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:47.553 02:29:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:47.553 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:47.553 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:47.554 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:47.554 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:47.554 02:29:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:54.125 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:54.125 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:54.126 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:54.126 
02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:54.126 Found net devices under 0000:af:00.0: cvl_0_0 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:54.126 02:29:23 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:54.126 Found net devices under 0000:af:00.1: cvl_0_1 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:54.126 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:54.126 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.412 ms 00:07:54.126 00:07:54.126 --- 10.0.0.2 ping statistics --- 00:07:54.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.126 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:54.126 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:54.126 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:07:54.126 00:07:54.126 --- 10.0.0.1 ping statistics --- 00:07:54.126 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.126 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=814367 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 814367 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 814367 ']' 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.126 02:29:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:54.126 [2024-12-16 02:29:23.978583] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:07:54.126 [2024-12-16 02:29:23.978637] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:54.126 [2024-12-16 02:29:24.055425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:54.126 [2024-12-16 02:29:24.078212] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:54.126 [2024-12-16 02:29:24.078249] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:54.126 [2024-12-16 02:29:24.078257] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:54.126 [2024-12-16 02:29:24.078262] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:54.126 [2024-12-16 02:29:24.078268] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:54.126 [2024-12-16 02:29:24.079508] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.126 [2024-12-16 02:29:24.079617] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.126 [2024-12-16 02:29:24.079618] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:54.126 02:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.126 02:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:54.126 02:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:54.126 02:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:54.126 02:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:54.126 02:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:54.126 02:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:54.126 [2024-12-16 02:29:24.403481] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:54.126 02:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:54.126 02:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:54.126 02:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:54.386 02:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:54.386 02:29:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:54.644 02:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:54.644 02:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=7e940e66-799c-4be0-a3e1-26e3909c7671 00:07:54.644 02:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 7e940e66-799c-4be0-a3e1-26e3909c7671 lvol 20 00:07:54.901 02:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c6915ece-85ed-47d7-a54f-ec3fa939d44d 00:07:54.901 02:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:55.159 02:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c6915ece-85ed-47d7-a54f-ec3fa939d44d 00:07:55.418 02:29:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:55.418 [2024-12-16 02:29:26.014850] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:55.418 02:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:55.676 02:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=814844 00:07:55.676 02:29:26 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:55.676 02:29:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:56.613 02:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot c6915ece-85ed-47d7-a54f-ec3fa939d44d MY_SNAPSHOT 00:07:56.871 02:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9f67c13c-935e-45ea-a1e1-3e448c835d10 00:07:56.871 02:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize c6915ece-85ed-47d7-a54f-ec3fa939d44d 30 00:07:57.129 02:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 9f67c13c-935e-45ea-a1e1-3e448c835d10 MY_CLONE 00:07:57.388 02:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=4ac1c05b-7cfa-450b-b03b-ead1961c723e 00:07:57.388 02:29:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 4ac1c05b-7cfa-450b-b03b-ead1961c723e 00:07:57.956 02:29:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 814844 00:08:06.075 Initializing NVMe Controllers 00:08:06.075 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:06.075 Controller IO queue size 128, less than required. 00:08:06.075 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:06.075 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:06.075 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:06.075 Initialization complete. Launching workers. 00:08:06.075 ======================================================== 00:08:06.075 Latency(us) 00:08:06.075 Device Information : IOPS MiB/s Average min max 00:08:06.075 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11959.30 46.72 10704.27 1582.98 82909.71 00:08:06.075 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11843.70 46.26 10807.20 3430.19 59227.27 00:08:06.075 ======================================================== 00:08:06.075 Total : 23803.00 92.98 10755.49 1582.98 82909.71 00:08:06.075 00:08:06.075 02:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:06.333 02:29:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c6915ece-85ed-47d7-a54f-ec3fa939d44d 00:08:06.592 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 7e940e66-799c-4be0-a3e1-26e3909c7671 00:08:06.592 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:06.592 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:06.592 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:06.592 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:06.592 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:06.592 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:06.592 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:06.592 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:06.592 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:06.851 rmmod nvme_tcp 00:08:06.851 rmmod nvme_fabrics 00:08:06.851 rmmod nvme_keyring 00:08:06.851 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:06.851 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:06.851 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:06.851 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 814367 ']' 00:08:06.851 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 814367 00:08:06.851 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 814367 ']' 00:08:06.851 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 814367 00:08:06.851 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:06.851 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:06.851 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 814367 00:08:06.851 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:06.851 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:06.851 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 814367' 00:08:06.851 killing process with pid 814367 00:08:06.851 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- common/autotest_common.sh@973 -- # kill 814367 00:08:06.851 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 814367 00:08:07.111 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:07.111 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:07.111 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:07.111 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:07.111 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:07.111 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:07.111 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:07.111 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:07.111 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:07.111 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.111 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.111 02:29:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.018 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:09.018 00:08:09.018 real 0m21.904s 00:08:09.018 user 1m2.965s 00:08:09.018 sys 0m7.614s 00:08:09.018 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.018 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:09.018 ************************************ 00:08:09.018 END TEST nvmf_lvol 00:08:09.018 
************************************ 00:08:09.018 02:29:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:09.018 02:29:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:09.018 02:29:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.018 02:29:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:09.278 ************************************ 00:08:09.278 START TEST nvmf_lvs_grow 00:08:09.278 ************************************ 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:09.278 * Looking for test storage... 00:08:09.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
scripts/common.sh@336 -- # read -ra ver1 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:09.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.278 --rc genhtml_branch_coverage=1 00:08:09.278 --rc genhtml_function_coverage=1 00:08:09.278 --rc genhtml_legend=1 00:08:09.278 --rc geninfo_all_blocks=1 00:08:09.278 --rc geninfo_unexecuted_blocks=1 00:08:09.278 00:08:09.278 ' 
00:08:09.278 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:09.278 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.278 --rc genhtml_branch_coverage=1 00:08:09.278 --rc genhtml_function_coverage=1 00:08:09.278 --rc genhtml_legend=1 00:08:09.278 --rc geninfo_all_blocks=1 00:08:09.278 --rc geninfo_unexecuted_blocks=1 00:08:09.278 00:08:09.279 ' 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:09.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.279 --rc genhtml_branch_coverage=1 00:08:09.279 --rc genhtml_function_coverage=1 00:08:09.279 --rc genhtml_legend=1 00:08:09.279 --rc geninfo_all_blocks=1 00:08:09.279 --rc geninfo_unexecuted_blocks=1 00:08:09.279 00:08:09.279 ' 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:09.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:09.279 --rc genhtml_branch_coverage=1 00:08:09.279 --rc genhtml_function_coverage=1 00:08:09.279 --rc genhtml_legend=1 00:08:09.279 --rc geninfo_all_blocks=1 00:08:09.279 --rc geninfo_unexecuted_blocks=1 00:08:09.279 00:08:09.279 ' 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.279 02:29:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.279 
02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.279 02:29:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:09.279 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.279 
02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:09.279 02:29:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:15.853 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:15.853 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:15.853 
02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:15.853 Found net devices under 0000:af:00.0: cvl_0_0 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:15.853 Found net devices under 0000:af:00.1: cvl_0_1 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:15.853 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:15.854 02:29:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:15.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:15.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.484 ms 00:08:15.854 00:08:15.854 --- 10.0.0.2 ping statistics --- 00:08:15.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.854 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:15.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:15.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:08:15.854 00:08:15.854 --- 10.0.0.1 ping statistics --- 00:08:15.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:15.854 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=820142 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 820142 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 820142 ']' 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:15.854 02:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:15.854 [2024-12-16 02:29:45.986156] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:15.854 [2024-12-16 02:29:45.986205] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.854 [2024-12-16 02:29:46.065410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.854 [2024-12-16 02:29:46.087355] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:15.854 [2024-12-16 02:29:46.087391] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:15.854 [2024-12-16 02:29:46.087398] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:15.854 [2024-12-16 02:29:46.087404] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:15.854 [2024-12-16 02:29:46.087410] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:15.854 [2024-12-16 02:29:46.087908] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.854 02:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:15.854 02:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:15.854 02:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:15.854 02:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:15.854 02:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:15.854 02:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:15.854 02:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:15.854 [2024-12-16 02:29:46.388012] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:15.854 02:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:15.854 02:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:15.854 02:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.854 02:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:15.854 ************************************ 00:08:15.854 START TEST lvs_grow_clean 00:08:15.854 ************************************ 00:08:15.854 02:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:15.854 02:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:08:15.854 02:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:15.854 02:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:15.854 02:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:15.854 02:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:15.854 02:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:15.854 02:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:15.854 02:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:15.854 02:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:16.113 02:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:16.113 02:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:16.372 02:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8c930551-7544-4ab4-a0fe-120b63ddcd07 00:08:16.372 02:29:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:16.372 02:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c930551-7544-4ab4-a0fe-120b63ddcd07 00:08:16.631 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:16.631 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:16.631 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8c930551-7544-4ab4-a0fe-120b63ddcd07 lvol 150 00:08:16.631 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=39d4e5e3-8755-44b7-b948-40a1b4f250c3 00:08:16.631 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:16.631 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:16.889 [2024-12-16 02:29:47.414775] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:16.889 [2024-12-16 02:29:47.414824] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:16.889 true 00:08:16.889 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c930551-7544-4ab4-a0fe-120b63ddcd07 00:08:16.889 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:17.148 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:17.148 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:17.148 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 39d4e5e3-8755-44b7-b948-40a1b4f250c3 00:08:17.407 02:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:17.666 [2024-12-16 02:29:48.120883] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:17.666 02:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:17.666 02:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:17.666 02:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=820607 00:08:17.666 02:29:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:17.666 02:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 820607 /var/tmp/bdevperf.sock 00:08:17.666 02:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 820607 ']' 00:08:17.666 02:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:17.666 02:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:17.666 02:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:17.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:17.666 02:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:17.666 02:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:17.925 [2024-12-16 02:29:48.348104] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:17.925 [2024-12-16 02:29:48.348149] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid820607 ] 00:08:17.925 [2024-12-16 02:29:48.419560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.925 [2024-12-16 02:29:48.441266] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.925 02:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.925 02:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:17.925 02:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:18.493 Nvme0n1 00:08:18.493 02:29:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:18.493 [ 00:08:18.493 { 00:08:18.493 "name": "Nvme0n1", 00:08:18.493 "aliases": [ 00:08:18.493 "39d4e5e3-8755-44b7-b948-40a1b4f250c3" 00:08:18.493 ], 00:08:18.493 "product_name": "NVMe disk", 00:08:18.493 "block_size": 4096, 00:08:18.493 "num_blocks": 38912, 00:08:18.493 "uuid": "39d4e5e3-8755-44b7-b948-40a1b4f250c3", 00:08:18.493 "numa_id": 1, 00:08:18.493 "assigned_rate_limits": { 00:08:18.493 "rw_ios_per_sec": 0, 00:08:18.493 "rw_mbytes_per_sec": 0, 00:08:18.493 "r_mbytes_per_sec": 0, 00:08:18.493 "w_mbytes_per_sec": 0 00:08:18.493 }, 00:08:18.493 "claimed": false, 00:08:18.493 "zoned": false, 00:08:18.493 "supported_io_types": { 00:08:18.493 "read": true, 
00:08:18.493 "write": true, 00:08:18.493 "unmap": true, 00:08:18.493 "flush": true, 00:08:18.493 "reset": true, 00:08:18.493 "nvme_admin": true, 00:08:18.493 "nvme_io": true, 00:08:18.493 "nvme_io_md": false, 00:08:18.493 "write_zeroes": true, 00:08:18.493 "zcopy": false, 00:08:18.493 "get_zone_info": false, 00:08:18.493 "zone_management": false, 00:08:18.493 "zone_append": false, 00:08:18.493 "compare": true, 00:08:18.493 "compare_and_write": true, 00:08:18.493 "abort": true, 00:08:18.493 "seek_hole": false, 00:08:18.493 "seek_data": false, 00:08:18.493 "copy": true, 00:08:18.493 "nvme_iov_md": false 00:08:18.493 }, 00:08:18.493 "memory_domains": [ 00:08:18.493 { 00:08:18.493 "dma_device_id": "system", 00:08:18.493 "dma_device_type": 1 00:08:18.493 } 00:08:18.493 ], 00:08:18.493 "driver_specific": { 00:08:18.493 "nvme": [ 00:08:18.493 { 00:08:18.493 "trid": { 00:08:18.493 "trtype": "TCP", 00:08:18.493 "adrfam": "IPv4", 00:08:18.493 "traddr": "10.0.0.2", 00:08:18.493 "trsvcid": "4420", 00:08:18.493 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:18.493 }, 00:08:18.493 "ctrlr_data": { 00:08:18.493 "cntlid": 1, 00:08:18.493 "vendor_id": "0x8086", 00:08:18.493 "model_number": "SPDK bdev Controller", 00:08:18.493 "serial_number": "SPDK0", 00:08:18.493 "firmware_revision": "25.01", 00:08:18.493 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:18.493 "oacs": { 00:08:18.493 "security": 0, 00:08:18.493 "format": 0, 00:08:18.493 "firmware": 0, 00:08:18.493 "ns_manage": 0 00:08:18.493 }, 00:08:18.493 "multi_ctrlr": true, 00:08:18.493 "ana_reporting": false 00:08:18.493 }, 00:08:18.493 "vs": { 00:08:18.493 "nvme_version": "1.3" 00:08:18.493 }, 00:08:18.493 "ns_data": { 00:08:18.493 "id": 1, 00:08:18.493 "can_share": true 00:08:18.493 } 00:08:18.493 } 00:08:18.493 ], 00:08:18.493 "mp_policy": "active_passive" 00:08:18.493 } 00:08:18.493 } 00:08:18.493 ] 00:08:18.493 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=820828 
00:08:18.493 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:18.493 02:29:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:18.752 Running I/O for 10 seconds... 00:08:19.687 Latency(us) 00:08:19.687 [2024-12-16T01:29:50.346Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:19.687 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.687 Nvme0n1 : 1.00 23120.00 90.31 0.00 0.00 0.00 0.00 0.00 00:08:19.687 [2024-12-16T01:29:50.346Z] =================================================================================================================== 00:08:19.687 [2024-12-16T01:29:50.346Z] Total : 23120.00 90.31 0.00 0.00 0.00 0.00 0.00 00:08:19.687 00:08:20.626 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8c930551-7544-4ab4-a0fe-120b63ddcd07 00:08:20.626 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.626 Nvme0n1 : 2.00 23254.00 90.84 0.00 0.00 0.00 0.00 0.00 00:08:20.626 [2024-12-16T01:29:51.285Z] =================================================================================================================== 00:08:20.626 [2024-12-16T01:29:51.286Z] Total : 23254.00 90.84 0.00 0.00 0.00 0.00 0.00 00:08:20.627 00:08:20.885 true 00:08:20.885 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c930551-7544-4ab4-a0fe-120b63ddcd07 00:08:20.885 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:08:20.885 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:20.885 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:20.885 02:29:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 820828 00:08:21.820 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.820 Nvme0n1 : 3.00 23356.00 91.23 0.00 0.00 0.00 0.00 0.00 00:08:21.820 [2024-12-16T01:29:52.479Z] =================================================================================================================== 00:08:21.820 [2024-12-16T01:29:52.479Z] Total : 23356.00 91.23 0.00 0.00 0.00 0.00 0.00 00:08:21.820 00:08:22.757 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.757 Nvme0n1 : 4.00 23492.00 91.77 0.00 0.00 0.00 0.00 0.00 00:08:22.757 [2024-12-16T01:29:53.416Z] =================================================================================================================== 00:08:22.757 [2024-12-16T01:29:53.416Z] Total : 23492.00 91.77 0.00 0.00 0.00 0.00 0.00 00:08:22.757 00:08:23.693 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.693 Nvme0n1 : 5.00 23552.00 92.00 0.00 0.00 0.00 0.00 0.00 00:08:23.693 [2024-12-16T01:29:54.352Z] =================================================================================================================== 00:08:23.693 [2024-12-16T01:29:54.352Z] Total : 23552.00 92.00 0.00 0.00 0.00 0.00 0.00 00:08:23.693 00:08:24.634 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:24.634 Nvme0n1 : 6.00 23596.17 92.17 0.00 0.00 0.00 0.00 0.00 00:08:24.634 [2024-12-16T01:29:55.293Z] =================================================================================================================== 00:08:24.634 
[2024-12-16T01:29:55.293Z] Total : 23596.17 92.17 0.00 0.00 0.00 0.00 0.00 00:08:24.634 00:08:25.570 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:25.570 Nvme0n1 : 7.00 23639.00 92.34 0.00 0.00 0.00 0.00 0.00 00:08:25.570 [2024-12-16T01:29:56.229Z] =================================================================================================================== 00:08:25.570 [2024-12-16T01:29:56.229Z] Total : 23639.00 92.34 0.00 0.00 0.00 0.00 0.00 00:08:25.570 00:08:26.947 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:26.947 Nvme0n1 : 8.00 23678.00 92.49 0.00 0.00 0.00 0.00 0.00 00:08:26.947 [2024-12-16T01:29:57.606Z] =================================================================================================================== 00:08:26.947 [2024-12-16T01:29:57.606Z] Total : 23678.00 92.49 0.00 0.00 0.00 0.00 0.00 00:08:26.947 00:08:27.882 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:27.882 Nvme0n1 : 9.00 23699.56 92.58 0.00 0.00 0.00 0.00 0.00 00:08:27.882 [2024-12-16T01:29:58.541Z] =================================================================================================================== 00:08:27.882 [2024-12-16T01:29:58.541Z] Total : 23699.56 92.58 0.00 0.00 0.00 0.00 0.00 00:08:27.882 00:08:28.818 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.818 Nvme0n1 : 10.00 23721.40 92.66 0.00 0.00 0.00 0.00 0.00 00:08:28.818 [2024-12-16T01:29:59.477Z] =================================================================================================================== 00:08:28.818 [2024-12-16T01:29:59.477Z] Total : 23721.40 92.66 0.00 0.00 0.00 0.00 0.00 00:08:28.818 00:08:28.818 00:08:28.818 Latency(us) 00:08:28.818 [2024-12-16T01:29:59.477Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.818 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:28.818 Nvme0n1 : 10.00 23723.98 92.67 0.00 0.00 5392.25 3198.78 10860.25 00:08:28.818 [2024-12-16T01:29:59.477Z] =================================================================================================================== 00:08:28.818 [2024-12-16T01:29:59.477Z] Total : 23723.98 92.67 0.00 0.00 5392.25 3198.78 10860.25 00:08:28.818 { 00:08:28.818 "results": [ 00:08:28.818 { 00:08:28.818 "job": "Nvme0n1", 00:08:28.818 "core_mask": "0x2", 00:08:28.818 "workload": "randwrite", 00:08:28.818 "status": "finished", 00:08:28.818 "queue_depth": 128, 00:08:28.818 "io_size": 4096, 00:08:28.818 "runtime": 10.004309, 00:08:28.818 "iops": 23723.977338164983, 00:08:28.818 "mibps": 92.67178647720696, 00:08:28.818 "io_failed": 0, 00:08:28.818 "io_timeout": 0, 00:08:28.818 "avg_latency_us": 5392.250142775685, 00:08:28.818 "min_latency_us": 3198.7809523809524, 00:08:28.818 "max_latency_us": 10860.251428571428 00:08:28.818 } 00:08:28.818 ], 00:08:28.818 "core_count": 1 00:08:28.818 } 00:08:28.818 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 820607 00:08:28.818 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 820607 ']' 00:08:28.818 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 820607 00:08:28.818 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:28.818 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:28.818 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 820607 00:08:28.818 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:28.818 02:29:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:28.818 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 820607' 00:08:28.818 killing process with pid 820607 00:08:28.818 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 820607 00:08:28.818 Received shutdown signal, test time was about 10.000000 seconds 00:08:28.818 00:08:28.818 Latency(us) 00:08:28.818 [2024-12-16T01:29:59.477Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.818 [2024-12-16T01:29:59.477Z] =================================================================================================================== 00:08:28.818 [2024-12-16T01:29:59.477Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:28.818 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 820607 00:08:28.818 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:29.076 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:29.334 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c930551-7544-4ab4-a0fe-120b63ddcd07 00:08:29.334 02:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:29.593 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:08:29.593 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:29.593 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:29.851 [2024-12-16 02:30:00.289552] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:29.851 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c930551-7544-4ab4-a0fe-120b63ddcd07 00:08:29.851 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:29.852 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c930551-7544-4ab4-a0fe-120b63ddcd07 00:08:29.852 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:29.852 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.852 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:29.852 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.852 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:29.852 02:30:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:29.852 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:29.852 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:29.852 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c930551-7544-4ab4-a0fe-120b63ddcd07 00:08:30.111 request: 00:08:30.111 { 00:08:30.111 "uuid": "8c930551-7544-4ab4-a0fe-120b63ddcd07", 00:08:30.111 "method": "bdev_lvol_get_lvstores", 00:08:30.111 "req_id": 1 00:08:30.111 } 00:08:30.111 Got JSON-RPC error response 00:08:30.111 response: 00:08:30.111 { 00:08:30.111 "code": -19, 00:08:30.111 "message": "No such device" 00:08:30.111 } 00:08:30.111 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:30.111 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:30.111 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:30.111 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:30.111 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:30.111 aio_bdev 00:08:30.111 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 39d4e5e3-8755-44b7-b948-40a1b4f250c3 00:08:30.111 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=39d4e5e3-8755-44b7-b948-40a1b4f250c3 00:08:30.111 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:30.111 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:30.111 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:30.111 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:30.111 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:30.370 02:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 39d4e5e3-8755-44b7-b948-40a1b4f250c3 -t 2000 00:08:30.628 [ 00:08:30.628 { 00:08:30.628 "name": "39d4e5e3-8755-44b7-b948-40a1b4f250c3", 00:08:30.628 "aliases": [ 00:08:30.628 "lvs/lvol" 00:08:30.628 ], 00:08:30.628 "product_name": "Logical Volume", 00:08:30.628 "block_size": 4096, 00:08:30.628 "num_blocks": 38912, 00:08:30.628 "uuid": "39d4e5e3-8755-44b7-b948-40a1b4f250c3", 00:08:30.628 "assigned_rate_limits": { 00:08:30.628 "rw_ios_per_sec": 0, 00:08:30.628 "rw_mbytes_per_sec": 0, 00:08:30.628 "r_mbytes_per_sec": 0, 00:08:30.628 "w_mbytes_per_sec": 0 00:08:30.628 }, 00:08:30.628 "claimed": false, 00:08:30.628 "zoned": false, 00:08:30.628 "supported_io_types": { 00:08:30.628 "read": true, 00:08:30.628 "write": true, 00:08:30.628 "unmap": true, 00:08:30.628 "flush": false, 00:08:30.628 "reset": true, 00:08:30.628 
"nvme_admin": false, 00:08:30.628 "nvme_io": false, 00:08:30.628 "nvme_io_md": false, 00:08:30.628 "write_zeroes": true, 00:08:30.628 "zcopy": false, 00:08:30.628 "get_zone_info": false, 00:08:30.628 "zone_management": false, 00:08:30.628 "zone_append": false, 00:08:30.628 "compare": false, 00:08:30.628 "compare_and_write": false, 00:08:30.628 "abort": false, 00:08:30.628 "seek_hole": true, 00:08:30.628 "seek_data": true, 00:08:30.628 "copy": false, 00:08:30.628 "nvme_iov_md": false 00:08:30.628 }, 00:08:30.628 "driver_specific": { 00:08:30.628 "lvol": { 00:08:30.628 "lvol_store_uuid": "8c930551-7544-4ab4-a0fe-120b63ddcd07", 00:08:30.628 "base_bdev": "aio_bdev", 00:08:30.628 "thin_provision": false, 00:08:30.628 "num_allocated_clusters": 38, 00:08:30.628 "snapshot": false, 00:08:30.628 "clone": false, 00:08:30.628 "esnap_clone": false 00:08:30.628 } 00:08:30.628 } 00:08:30.628 } 00:08:30.628 ] 00:08:30.628 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:30.628 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c930551-7544-4ab4-a0fe-120b63ddcd07 00:08:30.629 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:30.888 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:30.888 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:30.888 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8c930551-7544-4ab4-a0fe-120b63ddcd07 00:08:30.888 02:30:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:30.888 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 39d4e5e3-8755-44b7-b948-40a1b4f250c3 00:08:31.147 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8c930551-7544-4ab4-a0fe-120b63ddcd07 00:08:31.405 02:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:31.664 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:31.664 00:08:31.664 real 0m15.704s 00:08:31.664 user 0m15.283s 00:08:31.664 sys 0m1.477s 00:08:31.664 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.664 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:31.664 ************************************ 00:08:31.664 END TEST lvs_grow_clean 00:08:31.664 ************************************ 00:08:31.664 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:31.664 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:31.664 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.664 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:31.664 ************************************ 
00:08:31.664 START TEST lvs_grow_dirty 00:08:31.664 ************************************ 00:08:31.664 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:31.664 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:31.664 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:31.664 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:31.664 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:31.664 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:31.664 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:31.664 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:31.664 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:31.664 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:31.923 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:31.923 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:32.182 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=028b321e-112d-4788-a377-43293eedf177 00:08:32.182 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 028b321e-112d-4788-a377-43293eedf177 00:08:32.182 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:32.441 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:32.441 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:32.441 02:30:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 028b321e-112d-4788-a377-43293eedf177 lvol 150 00:08:32.441 02:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=27cd29ff-8d85-45f1-9b17-8b59e2e31360 00:08:32.441 02:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:32.441 02:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:32.698 [2024-12-16 02:30:03.191516] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:08:32.698 [2024-12-16 02:30:03.191563] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:32.698 true 00:08:32.698 02:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 028b321e-112d-4788-a377-43293eedf177 00:08:32.698 02:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:32.956 02:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:32.956 02:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:32.956 02:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 27cd29ff-8d85-45f1-9b17-8b59e2e31360 00:08:33.215 02:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:33.474 [2024-12-16 02:30:03.941720] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:33.474 02:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:33.474 02:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=823483 00:08:33.474 02:30:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:33.474 02:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:33.474 02:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 823483 /var/tmp/bdevperf.sock 00:08:33.474 02:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 823483 ']' 00:08:33.474 02:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:33.474 02:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:33.474 02:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:33.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:33.474 02:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:33.474 02:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:33.734 [2024-12-16 02:30:04.175642] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:33.734 [2024-12-16 02:30:04.175689] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid823483 ] 00:08:33.734 [2024-12-16 02:30:04.248919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.734 [2024-12-16 02:30:04.271315] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.734 02:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:33.734 02:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:33.734 02:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:33.993 Nvme0n1 00:08:34.252 02:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:34.252 [ 00:08:34.252 { 00:08:34.252 "name": "Nvme0n1", 00:08:34.252 "aliases": [ 00:08:34.252 "27cd29ff-8d85-45f1-9b17-8b59e2e31360" 00:08:34.252 ], 00:08:34.252 "product_name": "NVMe disk", 00:08:34.252 "block_size": 4096, 00:08:34.252 "num_blocks": 38912, 00:08:34.252 "uuid": "27cd29ff-8d85-45f1-9b17-8b59e2e31360", 00:08:34.252 "numa_id": 1, 00:08:34.252 "assigned_rate_limits": { 00:08:34.252 "rw_ios_per_sec": 0, 00:08:34.252 "rw_mbytes_per_sec": 0, 00:08:34.252 "r_mbytes_per_sec": 0, 00:08:34.252 "w_mbytes_per_sec": 0 00:08:34.252 }, 00:08:34.252 "claimed": false, 00:08:34.252 "zoned": false, 00:08:34.252 "supported_io_types": { 00:08:34.252 "read": true, 
00:08:34.252 "write": true, 00:08:34.252 "unmap": true, 00:08:34.252 "flush": true, 00:08:34.252 "reset": true, 00:08:34.252 "nvme_admin": true, 00:08:34.252 "nvme_io": true, 00:08:34.252 "nvme_io_md": false, 00:08:34.252 "write_zeroes": true, 00:08:34.252 "zcopy": false, 00:08:34.252 "get_zone_info": false, 00:08:34.252 "zone_management": false, 00:08:34.252 "zone_append": false, 00:08:34.252 "compare": true, 00:08:34.252 "compare_and_write": true, 00:08:34.252 "abort": true, 00:08:34.252 "seek_hole": false, 00:08:34.252 "seek_data": false, 00:08:34.252 "copy": true, 00:08:34.252 "nvme_iov_md": false 00:08:34.252 }, 00:08:34.252 "memory_domains": [ 00:08:34.252 { 00:08:34.252 "dma_device_id": "system", 00:08:34.252 "dma_device_type": 1 00:08:34.252 } 00:08:34.252 ], 00:08:34.252 "driver_specific": { 00:08:34.252 "nvme": [ 00:08:34.252 { 00:08:34.252 "trid": { 00:08:34.252 "trtype": "TCP", 00:08:34.252 "adrfam": "IPv4", 00:08:34.252 "traddr": "10.0.0.2", 00:08:34.252 "trsvcid": "4420", 00:08:34.252 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:34.252 }, 00:08:34.252 "ctrlr_data": { 00:08:34.252 "cntlid": 1, 00:08:34.252 "vendor_id": "0x8086", 00:08:34.252 "model_number": "SPDK bdev Controller", 00:08:34.252 "serial_number": "SPDK0", 00:08:34.252 "firmware_revision": "25.01", 00:08:34.252 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:34.252 "oacs": { 00:08:34.252 "security": 0, 00:08:34.252 "format": 0, 00:08:34.252 "firmware": 0, 00:08:34.252 "ns_manage": 0 00:08:34.252 }, 00:08:34.252 "multi_ctrlr": true, 00:08:34.252 "ana_reporting": false 00:08:34.252 }, 00:08:34.252 "vs": { 00:08:34.252 "nvme_version": "1.3" 00:08:34.252 }, 00:08:34.252 "ns_data": { 00:08:34.252 "id": 1, 00:08:34.252 "can_share": true 00:08:34.252 } 00:08:34.252 } 00:08:34.252 ], 00:08:34.252 "mp_policy": "active_passive" 00:08:34.252 } 00:08:34.252 } 00:08:34.252 ] 00:08:34.252 02:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=823495 
00:08:34.252 02:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:34.252 02:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:34.513 Running I/O for 10 seconds... 00:08:35.449 Latency(us) 00:08:35.449 [2024-12-16T01:30:06.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:35.449 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.449 Nvme0n1 : 1.00 23316.00 91.08 0.00 0.00 0.00 0.00 0.00 00:08:35.449 [2024-12-16T01:30:06.108Z] =================================================================================================================== 00:08:35.449 [2024-12-16T01:30:06.108Z] Total : 23316.00 91.08 0.00 0.00 0.00 0.00 0.00 00:08:35.449 00:08:36.461 02:30:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 028b321e-112d-4788-a377-43293eedf177 00:08:36.461 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.461 Nvme0n1 : 2.00 23366.00 91.27 0.00 0.00 0.00 0.00 0.00 00:08:36.461 [2024-12-16T01:30:07.120Z] =================================================================================================================== 00:08:36.461 [2024-12-16T01:30:07.120Z] Total : 23366.00 91.27 0.00 0.00 0.00 0.00 0.00 00:08:36.461 00:08:36.461 true 00:08:36.461 02:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 028b321e-112d-4788-a377-43293eedf177 00:08:36.461 02:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:08:36.742 02:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:36.742 02:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:36.742 02:30:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 823495 00:08:37.353 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.353 Nvme0n1 : 3.00 23414.67 91.46 0.00 0.00 0.00 0.00 0.00 00:08:37.353 [2024-12-16T01:30:08.012Z] =================================================================================================================== 00:08:37.353 [2024-12-16T01:30:08.012Z] Total : 23414.67 91.46 0.00 0.00 0.00 0.00 0.00 00:08:37.353 00:08:38.728 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.728 Nvme0n1 : 4.00 23531.25 91.92 0.00 0.00 0.00 0.00 0.00 00:08:38.728 [2024-12-16T01:30:09.387Z] =================================================================================================================== 00:08:38.728 [2024-12-16T01:30:09.387Z] Total : 23531.25 91.92 0.00 0.00 0.00 0.00 0.00 00:08:38.728 00:08:39.664 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.664 Nvme0n1 : 5.00 23600.60 92.19 0.00 0.00 0.00 0.00 0.00 00:08:39.664 [2024-12-16T01:30:10.323Z] =================================================================================================================== 00:08:39.664 [2024-12-16T01:30:10.323Z] Total : 23600.60 92.19 0.00 0.00 0.00 0.00 0.00 00:08:39.664 00:08:40.600 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:40.600 Nvme0n1 : 6.00 23575.83 92.09 0.00 0.00 0.00 0.00 0.00 00:08:40.600 [2024-12-16T01:30:11.259Z] =================================================================================================================== 00:08:40.600 
[2024-12-16T01:30:11.259Z] Total : 23575.83 92.09 0.00 0.00 0.00 0.00 0.00 00:08:40.600 00:08:41.537 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.537 Nvme0n1 : 7.00 23625.57 92.29 0.00 0.00 0.00 0.00 0.00 00:08:41.537 [2024-12-16T01:30:12.196Z] =================================================================================================================== 00:08:41.537 [2024-12-16T01:30:12.196Z] Total : 23625.57 92.29 0.00 0.00 0.00 0.00 0.00 00:08:41.537 00:08:42.473 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.473 Nvme0n1 : 8.00 23662.25 92.43 0.00 0.00 0.00 0.00 0.00 00:08:42.473 [2024-12-16T01:30:13.132Z] =================================================================================================================== 00:08:42.473 [2024-12-16T01:30:13.132Z] Total : 23662.25 92.43 0.00 0.00 0.00 0.00 0.00 00:08:42.473 00:08:43.410 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.410 Nvme0n1 : 9.00 23693.56 92.55 0.00 0.00 0.00 0.00 0.00 00:08:43.410 [2024-12-16T01:30:14.069Z] =================================================================================================================== 00:08:43.410 [2024-12-16T01:30:14.069Z] Total : 23693.56 92.55 0.00 0.00 0.00 0.00 0.00 00:08:43.410 00:08:44.346 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.346 Nvme0n1 : 10.00 23714.90 92.64 0.00 0.00 0.00 0.00 0.00 00:08:44.346 [2024-12-16T01:30:15.005Z] =================================================================================================================== 00:08:44.346 [2024-12-16T01:30:15.005Z] Total : 23714.90 92.64 0.00 0.00 0.00 0.00 0.00 00:08:44.346 00:08:44.346 00:08:44.346 Latency(us) 00:08:44.346 [2024-12-16T01:30:15.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.346 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:44.346 Nvme0n1 : 10.00 23717.92 92.65 0.00 0.00 5393.91 3151.97 14605.17 00:08:44.346 [2024-12-16T01:30:15.005Z] =================================================================================================================== 00:08:44.346 [2024-12-16T01:30:15.005Z] Total : 23717.92 92.65 0.00 0.00 5393.91 3151.97 14605.17 00:08:44.346 { 00:08:44.346 "results": [ 00:08:44.346 { 00:08:44.346 "job": "Nvme0n1", 00:08:44.346 "core_mask": "0x2", 00:08:44.346 "workload": "randwrite", 00:08:44.346 "status": "finished", 00:08:44.346 "queue_depth": 128, 00:08:44.346 "io_size": 4096, 00:08:44.346 "runtime": 10.004123, 00:08:44.346 "iops": 23717.921101130003, 00:08:44.346 "mibps": 92.64812930128907, 00:08:44.346 "io_failed": 0, 00:08:44.346 "io_timeout": 0, 00:08:44.346 "avg_latency_us": 5393.908510659733, 00:08:44.346 "min_latency_us": 3151.9695238095237, 00:08:44.346 "max_latency_us": 14605.165714285715 00:08:44.346 } 00:08:44.346 ], 00:08:44.346 "core_count": 1 00:08:44.346 } 00:08:44.346 02:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 823483 00:08:44.346 02:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 823483 ']' 00:08:44.346 02:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 823483 00:08:44.346 02:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:44.346 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.346 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 823483 00:08:44.607 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:44.607 02:30:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:44.607 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 823483' 00:08:44.607 killing process with pid 823483 00:08:44.607 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 823483 00:08:44.607 Received shutdown signal, test time was about 10.000000 seconds 00:08:44.607 00:08:44.607 Latency(us) 00:08:44.607 [2024-12-16T01:30:15.266Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.607 [2024-12-16T01:30:15.266Z] =================================================================================================================== 00:08:44.607 [2024-12-16T01:30:15.266Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:44.607 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 823483 00:08:44.607 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:44.866 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:45.125 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 028b321e-112d-4788-a377-43293eedf177 00:08:45.125 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:45.384 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:08:45.384 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:45.384 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 820142 00:08:45.384 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 820142 00:08:45.384 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 820142 Killed "${NVMF_APP[@]}" "$@" 00:08:45.384 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:45.384 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:45.384 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:45.384 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:45.384 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:45.384 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=825700 00:08:45.384 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 825700 00:08:45.384 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:45.384 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 825700 ']' 00:08:45.384 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.384 02:30:15 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.384 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.384 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.384 02:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:45.384 [2024-12-16 02:30:15.885322] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:45.384 [2024-12-16 02:30:15.885370] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.384 [2024-12-16 02:30:15.962674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.384 [2024-12-16 02:30:15.983833] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.384 [2024-12-16 02:30:15.983875] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.384 [2024-12-16 02:30:15.983886] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.384 [2024-12-16 02:30:15.983892] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.384 [2024-12-16 02:30:15.983898] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:45.384 [2024-12-16 02:30:15.984404] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.643 02:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.643 02:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:45.643 02:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:45.643 02:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:45.643 02:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:45.643 02:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.643 02:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:45.643 [2024-12-16 02:30:16.289927] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:45.643 [2024-12-16 02:30:16.290008] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:45.643 [2024-12-16 02:30:16.290032] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:45.902 02:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:45.902 02:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 27cd29ff-8d85-45f1-9b17-8b59e2e31360 00:08:45.902 02:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=27cd29ff-8d85-45f1-9b17-8b59e2e31360 
00:08:45.902 02:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:45.902 02:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:45.902 02:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:45.902 02:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:45.902 02:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:45.902 02:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 27cd29ff-8d85-45f1-9b17-8b59e2e31360 -t 2000 00:08:46.161 [ 00:08:46.161 { 00:08:46.161 "name": "27cd29ff-8d85-45f1-9b17-8b59e2e31360", 00:08:46.161 "aliases": [ 00:08:46.161 "lvs/lvol" 00:08:46.161 ], 00:08:46.161 "product_name": "Logical Volume", 00:08:46.161 "block_size": 4096, 00:08:46.161 "num_blocks": 38912, 00:08:46.161 "uuid": "27cd29ff-8d85-45f1-9b17-8b59e2e31360", 00:08:46.161 "assigned_rate_limits": { 00:08:46.161 "rw_ios_per_sec": 0, 00:08:46.161 "rw_mbytes_per_sec": 0, 00:08:46.161 "r_mbytes_per_sec": 0, 00:08:46.161 "w_mbytes_per_sec": 0 00:08:46.161 }, 00:08:46.161 "claimed": false, 00:08:46.161 "zoned": false, 00:08:46.161 "supported_io_types": { 00:08:46.161 "read": true, 00:08:46.161 "write": true, 00:08:46.161 "unmap": true, 00:08:46.161 "flush": false, 00:08:46.161 "reset": true, 00:08:46.161 "nvme_admin": false, 00:08:46.161 "nvme_io": false, 00:08:46.161 "nvme_io_md": false, 00:08:46.161 "write_zeroes": true, 00:08:46.161 "zcopy": false, 00:08:46.161 "get_zone_info": false, 00:08:46.161 "zone_management": false, 00:08:46.161 "zone_append": 
false, 00:08:46.161 "compare": false, 00:08:46.161 "compare_and_write": false, 00:08:46.161 "abort": false, 00:08:46.161 "seek_hole": true, 00:08:46.161 "seek_data": true, 00:08:46.161 "copy": false, 00:08:46.161 "nvme_iov_md": false 00:08:46.161 }, 00:08:46.161 "driver_specific": { 00:08:46.161 "lvol": { 00:08:46.161 "lvol_store_uuid": "028b321e-112d-4788-a377-43293eedf177", 00:08:46.161 "base_bdev": "aio_bdev", 00:08:46.161 "thin_provision": false, 00:08:46.161 "num_allocated_clusters": 38, 00:08:46.161 "snapshot": false, 00:08:46.161 "clone": false, 00:08:46.161 "esnap_clone": false 00:08:46.161 } 00:08:46.161 } 00:08:46.161 } 00:08:46.161 ] 00:08:46.161 02:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:46.161 02:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 028b321e-112d-4788-a377-43293eedf177 00:08:46.161 02:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:46.420 02:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:46.420 02:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 028b321e-112d-4788-a377-43293eedf177 00:08:46.420 02:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:46.420 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:46.420 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:08:46.680 [2024-12-16 02:30:17.234799] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:46.680 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 028b321e-112d-4788-a377-43293eedf177 00:08:46.680 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:46.680 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 028b321e-112d-4788-a377-43293eedf177 00:08:46.680 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:46.680 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:46.681 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:46.681 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:46.681 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:46.681 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:46.681 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:46.681 02:30:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:46.681 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 028b321e-112d-4788-a377-43293eedf177 00:08:46.939 request: 00:08:46.939 { 00:08:46.939 "uuid": "028b321e-112d-4788-a377-43293eedf177", 00:08:46.939 "method": "bdev_lvol_get_lvstores", 00:08:46.939 "req_id": 1 00:08:46.939 } 00:08:46.939 Got JSON-RPC error response 00:08:46.939 response: 00:08:46.939 { 00:08:46.939 "code": -19, 00:08:46.939 "message": "No such device" 00:08:46.939 } 00:08:46.939 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:46.939 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:46.939 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:46.939 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:46.940 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:47.199 aio_bdev 00:08:47.199 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 27cd29ff-8d85-45f1-9b17-8b59e2e31360 00:08:47.199 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=27cd29ff-8d85-45f1-9b17-8b59e2e31360 00:08:47.199 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:47.199 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:47.199 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:47.199 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:47.199 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:47.199 02:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 27cd29ff-8d85-45f1-9b17-8b59e2e31360 -t 2000 00:08:47.457 [ 00:08:47.457 { 00:08:47.457 "name": "27cd29ff-8d85-45f1-9b17-8b59e2e31360", 00:08:47.457 "aliases": [ 00:08:47.457 "lvs/lvol" 00:08:47.457 ], 00:08:47.457 "product_name": "Logical Volume", 00:08:47.457 "block_size": 4096, 00:08:47.457 "num_blocks": 38912, 00:08:47.458 "uuid": "27cd29ff-8d85-45f1-9b17-8b59e2e31360", 00:08:47.458 "assigned_rate_limits": { 00:08:47.458 "rw_ios_per_sec": 0, 00:08:47.458 "rw_mbytes_per_sec": 0, 00:08:47.458 "r_mbytes_per_sec": 0, 00:08:47.458 "w_mbytes_per_sec": 0 00:08:47.458 }, 00:08:47.458 "claimed": false, 00:08:47.458 "zoned": false, 00:08:47.458 "supported_io_types": { 00:08:47.458 "read": true, 00:08:47.458 "write": true, 00:08:47.458 "unmap": true, 00:08:47.458 "flush": false, 00:08:47.458 "reset": true, 00:08:47.458 "nvme_admin": false, 00:08:47.458 "nvme_io": false, 00:08:47.458 "nvme_io_md": false, 00:08:47.458 "write_zeroes": true, 00:08:47.458 "zcopy": false, 00:08:47.458 "get_zone_info": false, 00:08:47.458 "zone_management": false, 00:08:47.458 "zone_append": false, 00:08:47.458 "compare": false, 00:08:47.458 "compare_and_write": false, 
00:08:47.458 "abort": false, 00:08:47.458 "seek_hole": true, 00:08:47.458 "seek_data": true, 00:08:47.458 "copy": false, 00:08:47.458 "nvme_iov_md": false 00:08:47.458 }, 00:08:47.458 "driver_specific": { 00:08:47.458 "lvol": { 00:08:47.458 "lvol_store_uuid": "028b321e-112d-4788-a377-43293eedf177", 00:08:47.458 "base_bdev": "aio_bdev", 00:08:47.458 "thin_provision": false, 00:08:47.458 "num_allocated_clusters": 38, 00:08:47.458 "snapshot": false, 00:08:47.458 "clone": false, 00:08:47.458 "esnap_clone": false 00:08:47.458 } 00:08:47.458 } 00:08:47.458 } 00:08:47.458 ] 00:08:47.458 02:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:47.458 02:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 028b321e-112d-4788-a377-43293eedf177 00:08:47.458 02:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:47.716 02:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:47.716 02:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 028b321e-112d-4788-a377-43293eedf177 00:08:47.716 02:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:47.975 02:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:47.975 02:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 27cd29ff-8d85-45f1-9b17-8b59e2e31360 00:08:47.975 02:30:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 028b321e-112d-4788-a377-43293eedf177 00:08:48.234 02:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:48.492 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:48.492 00:08:48.492 real 0m16.836s 00:08:48.492 user 0m43.516s 00:08:48.492 sys 0m3.856s 00:08:48.492 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.492 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:48.492 ************************************ 00:08:48.492 END TEST lvs_grow_dirty 00:08:48.492 ************************************ 00:08:48.492 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:48.492 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:48.492 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:48.492 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:48.492 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:48.492 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:48.492 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:48.492 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:48.492 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:48.492 nvmf_trace.0 00:08:48.492 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:48.492 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:48.492 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:48.492 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:48.492 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:48.492 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:48.492 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:48.492 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:48.492 rmmod nvme_tcp 00:08:48.751 rmmod nvme_fabrics 00:08:48.751 rmmod nvme_keyring 00:08:48.751 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:48.751 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:48.751 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:48.751 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 825700 ']' 00:08:48.751 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 825700 00:08:48.751 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 825700 ']' 00:08:48.751 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 825700 
00:08:48.751 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:48.751 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:48.751 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 825700 00:08:48.751 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:48.751 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:48.751 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 825700' 00:08:48.751 killing process with pid 825700 00:08:48.751 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 825700 00:08:48.751 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 825700 00:08:48.751 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:48.751 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:48.751 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:48.751 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:49.010 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:49.010 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:49.010 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:49.010 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:49.010 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:08:49.010 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.010 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.010 02:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.915 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:50.915 00:08:50.915 real 0m41.784s 00:08:50.915 user 1m4.431s 00:08:50.915 sys 0m10.230s 00:08:50.915 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:50.915 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:50.915 ************************************ 00:08:50.915 END TEST nvmf_lvs_grow 00:08:50.915 ************************************ 00:08:50.915 02:30:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:50.915 02:30:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:50.915 02:30:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:50.915 02:30:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:50.915 ************************************ 00:08:50.915 START TEST nvmf_bdev_io_wait 00:08:50.915 ************************************ 00:08:50.915 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:51.175 * Looking for test storage... 
00:08:51.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:51.175 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.175 --rc genhtml_branch_coverage=1 00:08:51.175 --rc genhtml_function_coverage=1 00:08:51.175 --rc genhtml_legend=1 00:08:51.175 --rc geninfo_all_blocks=1 00:08:51.175 --rc geninfo_unexecuted_blocks=1 00:08:51.175 00:08:51.175 ' 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:51.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.175 --rc genhtml_branch_coverage=1 00:08:51.175 --rc genhtml_function_coverage=1 00:08:51.175 --rc genhtml_legend=1 00:08:51.175 --rc geninfo_all_blocks=1 00:08:51.175 --rc geninfo_unexecuted_blocks=1 00:08:51.175 00:08:51.175 ' 00:08:51.175 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:51.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.175 --rc genhtml_branch_coverage=1 00:08:51.175 --rc genhtml_function_coverage=1 00:08:51.175 --rc genhtml_legend=1 00:08:51.175 --rc geninfo_all_blocks=1 00:08:51.175 --rc geninfo_unexecuted_blocks=1 00:08:51.175 00:08:51.175 ' 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:51.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.176 --rc genhtml_branch_coverage=1 00:08:51.176 --rc genhtml_function_coverage=1 00:08:51.176 --rc genhtml_legend=1 00:08:51.176 --rc geninfo_all_blocks=1 00:08:51.176 --rc geninfo_unexecuted_blocks=1 00:08:51.176 00:08:51.176 ' 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.176 02:30:21 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:51.176 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:51.176 02:30:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:57.747 02:30:27 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:57.747 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:57.747 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.747 02:30:27 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:57.747 Found net devices under 0000:af:00.0: cvl_0_0 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.747 
02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:57.747 Found net devices under 0000:af:00.1: cvl_0_1 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:57.747 02:30:27 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:57.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:57.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:08:57.747 00:08:57.747 --- 10.0.0.2 ping statistics --- 00:08:57.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.747 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:57.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:57.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:08:57.747 00:08:57.747 --- 10.0.0.1 ping statistics --- 00:08:57.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.747 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=829901 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 829901 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 829901 ']' 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.747 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:57.748 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.748 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:57.748 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.748 [2024-12-16 02:30:27.837492] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:57.748 [2024-12-16 02:30:27.837539] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.748 [2024-12-16 02:30:27.913940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:57.748 [2024-12-16 02:30:27.937362] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.748 [2024-12-16 02:30:27.937403] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:57.748 [2024-12-16 02:30:27.937409] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:57.748 [2024-12-16 02:30:27.937415] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:57.748 [2024-12-16 02:30:27.937421] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:57.748 [2024-12-16 02:30:27.938834] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.748 [2024-12-16 02:30:27.938944] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.748 [2024-12-16 02:30:27.938977] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.748 [2024-12-16 02:30:27.938978] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:57.748 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.748 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:57.748 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:57.748 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:57.748 02:30:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.748 02:30:28 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.748 [2024-12-16 02:30:28.102207] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.748 Malloc0 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.748 
02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:57.748 [2024-12-16 02:30:28.153116] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=829928 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=829930 
00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:57.748 { 00:08:57.748 "params": { 00:08:57.748 "name": "Nvme$subsystem", 00:08:57.748 "trtype": "$TEST_TRANSPORT", 00:08:57.748 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:57.748 "adrfam": "ipv4", 00:08:57.748 "trsvcid": "$NVMF_PORT", 00:08:57.748 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:57.748 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:57.748 "hdgst": ${hdgst:-false}, 00:08:57.748 "ddgst": ${ddgst:-false} 00:08:57.748 }, 00:08:57.748 "method": "bdev_nvme_attach_controller" 00:08:57.748 } 00:08:57.748 EOF 00:08:57.748 )") 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=829932 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:57.748 { 00:08:57.748 "params": { 00:08:57.748 "name": "Nvme$subsystem", 00:08:57.748 "trtype": "$TEST_TRANSPORT", 00:08:57.748 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:57.748 "adrfam": "ipv4", 00:08:57.748 "trsvcid": "$NVMF_PORT", 00:08:57.748 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:57.748 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:57.748 "hdgst": ${hdgst:-false}, 00:08:57.748 "ddgst": ${ddgst:-false} 00:08:57.748 }, 00:08:57.748 "method": "bdev_nvme_attach_controller" 00:08:57.748 } 00:08:57.748 EOF 00:08:57.748 )") 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=829935 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:57.748 { 00:08:57.748 "params": { 00:08:57.748 "name": "Nvme$subsystem", 00:08:57.748 "trtype": "$TEST_TRANSPORT", 00:08:57.748 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:57.748 "adrfam": "ipv4", 00:08:57.748 "trsvcid": "$NVMF_PORT", 00:08:57.748 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:08:57.748 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:57.748 "hdgst": ${hdgst:-false}, 00:08:57.748 "ddgst": ${ddgst:-false} 00:08:57.748 }, 00:08:57.748 "method": "bdev_nvme_attach_controller" 00:08:57.748 } 00:08:57.748 EOF 00:08:57.748 )") 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:57.748 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:57.748 { 00:08:57.748 "params": { 00:08:57.748 "name": "Nvme$subsystem", 00:08:57.748 "trtype": "$TEST_TRANSPORT", 00:08:57.748 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:57.749 "adrfam": "ipv4", 00:08:57.749 "trsvcid": "$NVMF_PORT", 00:08:57.749 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:57.749 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:57.749 "hdgst": ${hdgst:-false}, 00:08:57.749 "ddgst": ${ddgst:-false} 00:08:57.749 }, 00:08:57.749 "method": "bdev_nvme_attach_controller" 00:08:57.749 } 00:08:57.749 EOF 00:08:57.749 )") 00:08:57.749 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:57.749 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 829928 00:08:57.749 02:30:28 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:57.749 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:57.749 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:57.749 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:57.749 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:57.749 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:57.749 "params": { 00:08:57.749 "name": "Nvme1", 00:08:57.749 "trtype": "tcp", 00:08:57.749 "traddr": "10.0.0.2", 00:08:57.749 "adrfam": "ipv4", 00:08:57.749 "trsvcid": "4420", 00:08:57.749 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:57.749 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:57.749 "hdgst": false, 00:08:57.749 "ddgst": false 00:08:57.749 }, 00:08:57.749 "method": "bdev_nvme_attach_controller" 00:08:57.749 }' 00:08:57.749 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:57.749 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:57.749 "params": { 00:08:57.749 "name": "Nvme1", 00:08:57.749 "trtype": "tcp", 00:08:57.749 "traddr": "10.0.0.2", 00:08:57.749 "adrfam": "ipv4", 00:08:57.749 "trsvcid": "4420", 00:08:57.749 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:57.749 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:57.749 "hdgst": false, 00:08:57.749 "ddgst": false 00:08:57.749 }, 00:08:57.749 "method": "bdev_nvme_attach_controller" 00:08:57.749 }' 00:08:57.749 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:57.749 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:57.749 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:57.749 "params": { 00:08:57.749 "name": "Nvme1", 00:08:57.749 "trtype": "tcp", 00:08:57.749 "traddr": "10.0.0.2", 00:08:57.749 "adrfam": "ipv4", 00:08:57.749 "trsvcid": "4420", 00:08:57.749 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:57.749 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:57.749 "hdgst": false, 00:08:57.749 "ddgst": false 00:08:57.749 }, 00:08:57.749 "method": "bdev_nvme_attach_controller" 00:08:57.749 }' 00:08:57.749 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:57.749 02:30:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:57.749 "params": { 00:08:57.749 "name": "Nvme1", 00:08:57.749 "trtype": "tcp", 00:08:57.749 "traddr": "10.0.0.2", 00:08:57.749 "adrfam": "ipv4", 00:08:57.749 "trsvcid": "4420", 00:08:57.749 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:57.749 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:57.749 "hdgst": false, 00:08:57.749 "ddgst": false 00:08:57.749 }, 00:08:57.749 "method": "bdev_nvme_attach_controller" 00:08:57.749 }' 00:08:57.749 [2024-12-16 02:30:28.204216] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:57.749 [2024-12-16 02:30:28.204262] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:57.749 [2024-12-16 02:30:28.207076] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:57.749 [2024-12-16 02:30:28.207079] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:57.749 [2024-12-16 02:30:28.207124] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:57.749 [2024-12-16 02:30:28.207124] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:57.749 [2024-12-16 02:30:28.211717] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:57.749 [2024-12-16 02:30:28.211761] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:57.749 [2024-12-16 02:30:28.402157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.007 [2024-12-16 02:30:28.419779] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:08:58.007 [2024-12-16 02:30:28.499013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.007 [2024-12-16 02:30:28.515934] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:08:58.007 [2024-12-16 02:30:28.570028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.007 [2024-12-16 02:30:28.586744] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:08:58.007 [2024-12-16 02:30:28.620609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.007 [2024-12-16 02:30:28.636494] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:08:58.265 Running I/O for 1 seconds... 00:08:58.265 Running I/O for 1 seconds... 00:08:58.265 Running I/O for 1 seconds... 00:08:58.523 Running I/O for 1 seconds... 
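The four identical attach-controller JSON blocks printed earlier in this log come from expanding the per-subsystem heredoc template in nvmf/common.sh with the test-environment values ($TEST_TRANSPORT=tcp, $NVMF_FIRST_TARGET_IP=10.0.0.2, $NVMF_PORT=4420). A minimal Python sketch of that expansion, for reference only: the function name gen_attach_config is hypothetical, while the field names and values are taken verbatim from the log.

```python
import json

def gen_attach_config(subsystem: int = 1, transport: str = "tcp",
                      traddr: str = "10.0.0.2", trsvcid: str = "4420") -> str:
    # Mirrors the heredoc template expanded by gen_nvmf_target_json:
    # one bdev_nvme_attach_controller call per subsystem.
    cfg = {
        "params": {
            "name": f"Nvme{subsystem}",
            "trtype": transport,
            "traddr": traddr,
            "adrfam": "ipv4",
            "trsvcid": trsvcid,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{subsystem}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{subsystem}",
            "hdgst": False,   # ${hdgst:-false} in the template
            "ddgst": False,   # ${ddgst:-false} in the template
        },
        "method": "bdev_nvme_attach_controller",
    }
    return json.dumps(cfg)

print(gen_attach_config())
```

Each of the four bdevperf instances receives one such document on --json /dev/fd/63, which is why the same config is printed four times above.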
00:08:59.089 243936.00 IOPS, 952.88 MiB/s [2024-12-16T01:30:29.748Z] 13866.00 IOPS, 54.16 MiB/s [2024-12-16T01:30:29.748Z] 10408.00 IOPS, 40.66 MiB/s 00:08:59.089 Latency(us) 00:08:59.089 [2024-12-16T01:30:29.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.089 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:59.089 Nvme1n1 : 1.00 243568.90 951.44 0.00 0.00 522.98 219.43 1482.36 00:08:59.089 [2024-12-16T01:30:29.748Z] =================================================================================================================== 00:08:59.089 [2024-12-16T01:30:29.748Z] Total : 243568.90 951.44 0.00 0.00 522.98 219.43 1482.36 00:08:59.089 00:08:59.089 Latency(us) 00:08:59.089 [2024-12-16T01:30:29.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.089 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:59.089 Nvme1n1 : 1.01 13925.43 54.40 0.00 0.00 9165.50 4337.86 18724.57 00:08:59.089 [2024-12-16T01:30:29.748Z] =================================================================================================================== 00:08:59.089 [2024-12-16T01:30:29.748Z] Total : 13925.43 54.40 0.00 0.00 9165.50 4337.86 18724.57 00:08:59.089 00:08:59.089 Latency(us) 00:08:59.089 [2024-12-16T01:30:29.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.089 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:59.089 Nvme1n1 : 1.01 10465.74 40.88 0.00 0.00 12184.01 6116.69 21221.18 00:08:59.089 [2024-12-16T01:30:29.748Z] =================================================================================================================== 00:08:59.089 [2024-12-16T01:30:29.748Z] Total : 10465.74 40.88 0.00 0.00 12184.01 6116.69 21221.18 00:08:59.347 02:30:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 829930 00:08:59.347 10795.00 IOPS, 42.17 MiB/s 
00:08:59.347 Latency(us) 00:08:59.347 [2024-12-16T01:30:30.006Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:59.347 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:59.347 Nvme1n1 : 1.01 10863.89 42.44 0.00 0.00 11751.32 4493.90 24466.77 00:08:59.347 [2024-12-16T01:30:30.006Z] =================================================================================================================== 00:08:59.347 [2024-12-16T01:30:30.006Z] Total : 10863.89 42.44 0.00 0.00 11751.32 4493.90 24466.77 00:08:59.607 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 829932 00:08:59.607 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 829935 00:08:59.607 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:59.607 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.607 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:59.607 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.607 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:59.607 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:59.607 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:59.607 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:59.607 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:59.607 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:59.607 02:30:30 
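The MiB/s column in each job summary above follows directly from the IOPS figure at the fixed 4096-byte IO size every bdevperf job uses (-o 4096 on each command line). A quick arithmetic check of the tabulated totals, not part of the harness itself:

```python
IO_SIZE = 4096           # -o 4096 on every bdevperf command line
MIB = 1024 * 1024

def mib_per_s(iops: float) -> float:
    """Convert an IOPS figure to MiB/s at a fixed IO size."""
    return iops * IO_SIZE / MIB

# Totals from the job summaries above: flush, unmap, write, read
# (matching 951.44, 54.40, 40.88, 42.44 MiB/s in the tables)
for iops in (243568.90, 13925.43, 10465.74, 10863.89):
    print(f"{iops:>10.2f} IOPS -> {mib_per_s(iops):7.2f} MiB/s")
```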
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:59.607 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:59.607 rmmod nvme_tcp 00:08:59.607 rmmod nvme_fabrics 00:08:59.607 rmmod nvme_keyring 00:08:59.607 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:59.607 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:59.607 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:59.607 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 829901 ']' 00:08:59.607 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 829901 00:08:59.607 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 829901 ']' 00:08:59.607 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 829901 00:08:59.607 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:59.607 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:59.607 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 829901 00:08:59.607 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:59.607 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:59.607 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 829901' 00:08:59.607 killing process with pid 829901 00:08:59.607 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 
-- # kill 829901 00:08:59.607 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 829901 00:08:59.866 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:59.866 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:59.866 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:59.866 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:59.866 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:59.866 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:59.866 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:59.866 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:59.866 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:59.866 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.866 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.866 02:30:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:01.771 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:02.030 00:09:02.030 real 0m10.873s 00:09:02.030 user 0m16.171s 00:09:02.030 sys 0m6.286s 00:09:02.030 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:02.030 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:02.030 
************************************ 00:09:02.030 END TEST nvmf_bdev_io_wait 00:09:02.030 ************************************ 00:09:02.030 02:30:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:02.030 02:30:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:02.030 02:30:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.030 02:30:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:02.030 ************************************ 00:09:02.030 START TEST nvmf_queue_depth 00:09:02.030 ************************************ 00:09:02.030 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:02.030 * Looking for test storage... 
00:09:02.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:02.030 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:02.030 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:09:02.030 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:02.030 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:02.031 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.031 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.031 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.031 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.031 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.031 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.031 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.031 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.031 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.031 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.031 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.031 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:02.031 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:02.031 
02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.031 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:02.031 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:02.031 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:02.031 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.031 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:02.031 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.031 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:02.031 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:02.290 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.290 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:02.290 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.290 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.290 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.290 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:02.290 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.290 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:02.290 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:02.290 --rc genhtml_branch_coverage=1 00:09:02.290 --rc genhtml_function_coverage=1 00:09:02.291 --rc genhtml_legend=1 00:09:02.291 --rc geninfo_all_blocks=1 00:09:02.291 --rc geninfo_unexecuted_blocks=1 00:09:02.291 00:09:02.291 ' 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:02.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.291 --rc genhtml_branch_coverage=1 00:09:02.291 --rc genhtml_function_coverage=1 00:09:02.291 --rc genhtml_legend=1 00:09:02.291 --rc geninfo_all_blocks=1 00:09:02.291 --rc geninfo_unexecuted_blocks=1 00:09:02.291 00:09:02.291 ' 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:02.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.291 --rc genhtml_branch_coverage=1 00:09:02.291 --rc genhtml_function_coverage=1 00:09:02.291 --rc genhtml_legend=1 00:09:02.291 --rc geninfo_all_blocks=1 00:09:02.291 --rc geninfo_unexecuted_blocks=1 00:09:02.291 00:09:02.291 ' 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:02.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.291 --rc genhtml_branch_coverage=1 00:09:02.291 --rc genhtml_function_coverage=1 00:09:02.291 --rc genhtml_legend=1 00:09:02.291 --rc geninfo_all_blocks=1 00:09:02.291 --rc geninfo_unexecuted_blocks=1 00:09:02.291 00:09:02.291 ' 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:02.291 02:30:32 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.291 02:30:32 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:02.291 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:02.291 02:30:32 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:02.291 02:30:32 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:08.862 02:30:38 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:08.862 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:08.862 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:08.862 Found net devices under 0000:af:00.0: cvl_0_0 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:08.862 Found net devices under 0000:af:00.1: cvl_0_1 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:08.862 
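The namespace split that the trace above is about to perform can be sketched as follows. Interface names, the namespace name, and the 10.0.0.x addresses are taken from this run; the `run` wrapper, which echoes commands instead of executing them, is added here purely for illustration, since the real commands require root and the physical `cvl_0_*` devices.

```shell
#!/usr/bin/env bash
# Sketch of the split performed by nvmf/common.sh in this log: the first
# detected net device (cvl_0_0) becomes the target side inside a fresh
# network namespace, while the second (cvl_0_1) stays on the host as the
# initiator. run() echoes instead of executing so the sketch is safe to
# read and dry-run without root; drop it to apply the commands for real.
set -euo pipefail

run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0     # from "Found net devices under 0000:af:00.0"
INITIATOR_IF=cvl_0_1  # from "Found net devices under 0000:af:00.1"

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
```

As the subsequent trace shows, the harness then verifies the split with a `ping` in each direction (host to 10.0.0.2, and `ip netns exec` back to 10.0.0.1) before starting the target.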
02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.862 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:08.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:09:08.863 00:09:08.863 --- 10.0.0.2 ping statistics --- 00:09:08.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.863 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:08.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:08.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:09:08.863 00:09:08.863 --- 10.0.0.1 ping statistics --- 00:09:08.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.863 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=833867 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 833867 
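The `waitforlisten 833867` step that follows launches `nvmf_tgt` inside the namespace and blocks until its RPC socket (`/var/tmp/spdk.sock`) is ready before any `rpc_cmd` is sent. A simplified stand-in for that wait loop is sketched below; the retry count and sleep interval are illustrative assumptions, not SPDK's exact values, and the real `waitforlisten` in `autotest_common.sh` also checks that the process is still alive.

```shell
#!/usr/bin/env bash
# Simplified sketch of the "waitforlisten" pattern seen above: poll until
# the given path exists and is a UNIX domain socket, or give up after a
# bounded number of retries. Interval and retry cap are assumed values.
wait_for_rpc_sock() {
    local sock=$1 retries=${2:-100}
    while (( retries-- > 0 )); do
        # -S is true once the path exists and is a UNIX domain socket
        [[ -S $sock ]] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```

Usage in the spirit of this log would be along the lines of `ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -m 0x2 & wait_for_rpc_sock /var/tmp/spdk.sock`.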
00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 833867 ']' 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.863 [2024-12-16 02:30:38.783958] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:09:08.863 [2024-12-16 02:30:38.784002] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:08.863 [2024-12-16 02:30:38.845904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.863 [2024-12-16 02:30:38.867015] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:08.863 [2024-12-16 02:30:38.867050] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:08.863 [2024-12-16 02:30:38.867056] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:08.863 [2024-12-16 02:30:38.867062] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:08.863 [2024-12-16 02:30:38.867067] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:08.863 [2024-12-16 02:30:38.867523] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.863 02:30:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.863 [2024-12-16 02:30:38.998302] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:08.863 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.863 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:08.863 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.863 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.863 Malloc0 00:09:08.863 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.863 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:08.863 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.863 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.863 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.863 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:08.863 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.863 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.863 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.863 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:08.863 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.863 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.863 [2024-12-16 02:30:39.048333] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:08.863 02:30:39 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.863 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=833890 00:09:08.863 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:08.863 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:08.863 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 833890 /var/tmp/bdevperf.sock 00:09:08.863 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 833890 ']' 00:09:08.863 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:08.863 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.863 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:08.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:08.863 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.863 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:08.863 [2024-12-16 02:30:39.099976] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:09:08.863 [2024-12-16 02:30:39.100018] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid833890 ] 00:09:08.863 [2024-12-16 02:30:39.175192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.863 [2024-12-16 02:30:39.197298] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.863 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.863 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:08.863 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:08.863 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.863 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:09.123 NVMe0n1 00:09:09.123 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.123 02:30:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:09.123 Running I/O for 10 seconds... 
00:09:11.436 11791.00 IOPS, 46.06 MiB/s [2024-12-16T01:30:43.030Z] 12236.50 IOPS, 47.80 MiB/s [2024-12-16T01:30:43.966Z] 12283.33 IOPS, 47.98 MiB/s [2024-12-16T01:30:44.708Z] 12477.75 IOPS, 48.74 MiB/s [2024-12-16T01:30:46.083Z] 12474.80 IOPS, 48.73 MiB/s [2024-12-16T01:30:47.018Z] 12495.83 IOPS, 48.81 MiB/s [2024-12-16T01:30:47.687Z] 12556.86 IOPS, 49.05 MiB/s [2024-12-16T01:30:49.062Z] 12545.12 IOPS, 49.00 MiB/s [2024-12-16T01:30:49.998Z] 12587.11 IOPS, 49.17 MiB/s [2024-12-16T01:30:49.998Z] 12573.00 IOPS, 49.11 MiB/s 00:09:19.339 Latency(us) 00:09:19.339 [2024-12-16T01:30:49.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:19.339 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:19.339 Verification LBA range: start 0x0 length 0x4000 00:09:19.339 NVMe0n1 : 10.06 12600.93 49.22 0.00 0.00 81009.82 20721.86 50681.17 00:09:19.339 [2024-12-16T01:30:49.998Z] =================================================================================================================== 00:09:19.339 [2024-12-16T01:30:49.998Z] Total : 12600.93 49.22 0.00 0.00 81009.82 20721.86 50681.17 00:09:19.339 { 00:09:19.339 "results": [ 00:09:19.339 { 00:09:19.339 "job": "NVMe0n1", 00:09:19.339 "core_mask": "0x1", 00:09:19.339 "workload": "verify", 00:09:19.339 "status": "finished", 00:09:19.339 "verify_range": { 00:09:19.339 "start": 0, 00:09:19.339 "length": 16384 00:09:19.339 }, 00:09:19.339 "queue_depth": 1024, 00:09:19.339 "io_size": 4096, 00:09:19.339 "runtime": 10.058304, 00:09:19.339 "iops": 12600.9315288144, 00:09:19.339 "mibps": 49.22238878443125, 00:09:19.339 "io_failed": 0, 00:09:19.339 "io_timeout": 0, 00:09:19.339 "avg_latency_us": 81009.82357574174, 00:09:19.339 "min_latency_us": 20721.859047619047, 00:09:19.339 "max_latency_us": 50681.17333333333 00:09:19.339 } 00:09:19.339 ], 00:09:19.339 "core_count": 1 00:09:19.339 } 00:09:19.339 02:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 833890 00:09:19.339 02:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 833890 ']' 00:09:19.339 02:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 833890 00:09:19.339 02:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:19.339 02:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.339 02:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 833890 00:09:19.339 02:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:19.339 02:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:19.339 02:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 833890' 00:09:19.339 killing process with pid 833890 00:09:19.339 02:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 833890 00:09:19.339 Received shutdown signal, test time was about 10.000000 seconds 00:09:19.339 00:09:19.339 Latency(us) 00:09:19.339 [2024-12-16T01:30:49.998Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:19.339 [2024-12-16T01:30:49.998Z] =================================================================================================================== 00:09:19.339 [2024-12-16T01:30:49.998Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:19.339 02:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 833890 00:09:19.339 02:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:19.339 02:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
00:09:19.339 02:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:19.339 02:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:19.339 02:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:19.339 02:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:19.339 02:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:19.339 02:30:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:19.339 rmmod nvme_tcp 00:09:19.597 rmmod nvme_fabrics 00:09:19.597 rmmod nvme_keyring 00:09:19.597 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:19.597 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:19.597 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:19.597 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 833867 ']' 00:09:19.597 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 833867 00:09:19.597 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 833867 ']' 00:09:19.597 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 833867 00:09:19.597 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:19.597 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:19.598 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 833867 00:09:19.598 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:09:19.598 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:19.598 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 833867' 00:09:19.598 killing process with pid 833867 00:09:19.598 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 833867 00:09:19.598 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 833867 00:09:19.856 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:19.856 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:19.856 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:19.856 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:19.856 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:19.856 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:19.856 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:19.856 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:19.856 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:19.856 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.856 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.856 02:30:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.759 02:30:52 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:21.759 00:09:21.759 real 0m19.836s 00:09:21.759 user 0m23.262s 00:09:21.759 sys 0m6.099s 00:09:21.759 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.759 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:21.759 ************************************ 00:09:21.759 END TEST nvmf_queue_depth 00:09:21.759 ************************************ 00:09:21.759 02:30:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:21.759 02:30:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:21.759 02:30:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.759 02:30:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:22.019 ************************************ 00:09:22.019 START TEST nvmf_target_multipath 00:09:22.019 ************************************ 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:22.019 * Looking for test storage... 
00:09:22.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:22.019 02:30:52 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:22.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.019 --rc genhtml_branch_coverage=1 00:09:22.019 --rc genhtml_function_coverage=1 00:09:22.019 --rc genhtml_legend=1 00:09:22.019 --rc geninfo_all_blocks=1 00:09:22.019 --rc geninfo_unexecuted_blocks=1 00:09:22.019 00:09:22.019 ' 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:22.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.019 --rc genhtml_branch_coverage=1 00:09:22.019 --rc genhtml_function_coverage=1 00:09:22.019 --rc genhtml_legend=1 00:09:22.019 --rc geninfo_all_blocks=1 00:09:22.019 --rc geninfo_unexecuted_blocks=1 00:09:22.019 00:09:22.019 ' 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:22.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.019 --rc genhtml_branch_coverage=1 00:09:22.019 --rc genhtml_function_coverage=1 00:09:22.019 --rc genhtml_legend=1 00:09:22.019 --rc geninfo_all_blocks=1 00:09:22.019 --rc geninfo_unexecuted_blocks=1 00:09:22.019 00:09:22.019 ' 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:22.019 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.019 --rc genhtml_branch_coverage=1 00:09:22.019 --rc genhtml_function_coverage=1 00:09:22.019 --rc genhtml_legend=1 00:09:22.019 --rc geninfo_all_blocks=1 00:09:22.019 --rc geninfo_unexecuted_blocks=1 00:09:22.019 00:09:22.019 ' 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:22.019 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:22.020 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.020 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.020 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.020 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:22.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:22.020 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:22.020 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:22.020 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:22.020 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:22.020 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:22.020 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:22.020 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:22.020 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:22.020 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:22.020 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.020 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:22.020 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:22.020 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:22.020 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.020 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.020 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.020 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:22.020 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:22.020 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:22.020 02:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:28.589 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:28.589 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:28.589 Found net devices under 0000:af:00.0: cvl_0_0 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:28.589 02:30:58 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:28.589 Found net devices under 0000:af:00.1: cvl_0_1 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:28.589 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:28.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:28.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:09:28.590 00:09:28.590 --- 10.0.0.2 ping statistics --- 00:09:28.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.590 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:28.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:28.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:09:28.590 00:09:28.590 --- 10.0.0.1 ping statistics --- 00:09:28.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.590 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:28.590 only one NIC for nvmf test 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:28.590 02:30:58 
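The nvmf_tcp_init sequence traced above (create a namespace, move the target NIC into it, assign 10.0.0.1/10.0.0.2, bring links up, open port 4420, then ping both ways) can be sketched as a standalone script. This is a reconstruction from the trace, not the nvmf/common.sh source; the interface names and the DRY_RUN wrapper are assumptions for illustration, and the script defaults to dry-run since the real commands need root and the cvl_0_* interfaces.

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init steps seen in the trace above.
# Defaults to DRY_RUN=1 (prints commands); set DRY_RUN=0 to execute.
set -euo pipefail

TARGET_IF=${TARGET_IF:-cvl_0_0}        # assumed names, matching the trace
INITIATOR_IF=${INITIATOR_IF:-cvl_0_1}
NS=${NS:-${TARGET_IF}_ns_spdk}
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1

run() {
  if [[ ${DRY_RUN:-1} -eq 1 ]]; then
    echo "$*"
  else
    "$@"
  fi
}

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"    # target NIC lives inside the namespace
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Rule is tagged with an SPDK_NVMF comment so teardown can find it later.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
run ping -c 1 "$TARGET_IP"
run ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"
```

Putting the target on one side of the namespace boundary and the initiator on the other is what lets a single two-port NIC exercise real packet transmission in the multipath test.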
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:28.590 rmmod nvme_tcp 00:09:28.590 rmmod nvme_fabrics 00:09:28.590 rmmod nvme_keyring 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:28.590 02:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:30.496 00:09:30.496 real 0m8.346s 00:09:30.496 user 0m1.895s 00:09:30.496 sys 0m4.450s 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:30.496 ************************************ 00:09:30.496 END TEST nvmf_target_multipath 00:09:30.496 ************************************ 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core 
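The iptr teardown step traced above relies on the SPDK_NVMF comment attached at insert time: cleanup is one iptables-save | grep -v SPDK_NVMF | iptables-restore pass rather than tracking individual rules. A minimal sketch of that tag-and-sweep pattern, simulated on a text ruleset (the sample rules are made up) so it runs without root:

```shell
# Tag-and-sweep cleanup: drop every rule carrying the SPDK_NVMF tag.
# The real flow pipes iptables-save through this filter into iptables-restore;
# here a hypothetical saved ruleset stands in for iptables-save output.
ruleset='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
-A INPUT -p icmp -j ACCEPT'

cleaned=$(printf '%s\n' "$ruleset" | grep -v SPDK_NVMF)
printf '%s\n' "$cleaned"
```

Because the tag is applied by the same wrapper that inserts the rule, the sweep cannot miss a test rule or touch an unrelated one.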
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:30.496 ************************************ 00:09:30.496 START TEST nvmf_zcopy 00:09:30.496 ************************************ 00:09:30.496 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:30.496 * Looking for test storage... 00:09:30.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:30.497 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:30.497 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:30.497 02:31:00 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.497 02:31:01 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:30.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.497 --rc genhtml_branch_coverage=1 00:09:30.497 --rc genhtml_function_coverage=1 00:09:30.497 --rc genhtml_legend=1 00:09:30.497 --rc geninfo_all_blocks=1 00:09:30.497 --rc geninfo_unexecuted_blocks=1 00:09:30.497 00:09:30.497 ' 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:30.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.497 --rc genhtml_branch_coverage=1 00:09:30.497 --rc genhtml_function_coverage=1 00:09:30.497 --rc genhtml_legend=1 00:09:30.497 --rc geninfo_all_blocks=1 00:09:30.497 --rc geninfo_unexecuted_blocks=1 00:09:30.497 00:09:30.497 ' 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:30.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.497 --rc genhtml_branch_coverage=1 00:09:30.497 --rc genhtml_function_coverage=1 00:09:30.497 --rc genhtml_legend=1 00:09:30.497 --rc geninfo_all_blocks=1 00:09:30.497 --rc geninfo_unexecuted_blocks=1 00:09:30.497 00:09:30.497 ' 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:30.497 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.497 --rc genhtml_branch_coverage=1 00:09:30.497 --rc 
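The cmp_versions trace above (lt 1.15 2, used to pick lcov options) splits each version string on dots and compares component by component. A simplified reconstruction of that less-than check, not the scripts/common.sh source:

```shell
# ver_lt A B: succeed iff version A sorts strictly before version B.
# Components are compared numerically, so 1.2 < 1.10 (unlike a string sort).
ver_lt() {
  local IFS=.
  local -a a=($1) b=($2)
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}   # missing components count as 0
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1   # equal versions are not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

Numeric per-component comparison is the point: a plain string comparison would wrongly rank 1.15 above 1.2.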
genhtml_function_coverage=1 00:09:30.497 --rc genhtml_legend=1 00:09:30.497 --rc geninfo_all_blocks=1 00:09:30.497 --rc geninfo_unexecuted_blocks=1 00:09:30.497 00:09:30.497 ' 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.497 02:31:01 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:30.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:30.497 02:31:01 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.497 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.498 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:30.498 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:30.498 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:30.498 02:31:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:37.070 02:31:06 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:37.070 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:37.070 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.070 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:37.070 Found net devices under 0000:af:00.0: cvl_0_0 00:09:37.071 02:31:06 
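The discovery loop traced above (nvmf/common.sh@410-429) resolves each PCI address to its kernel net devices by globbing sysfs and stripping the path prefix. A minimal standalone sketch of that lookup, using a mock sysfs tree so it runs without the real e810 hardware:

```shell
#!/usr/bin/env bash
# Sketch of the pci -> net-device lookup (nvmf/common.sh@411 and @427-429).
# A temp directory stands in for /sys/bus/pci/devices so this runs anywhere.
set -euo pipefail

mock=$(mktemp -d)
trap 'rm -rf "$mock"' EXIT

# Pretend 0000:af:00.0 exposes one netdev named cvl_0_0 (as in this run).
mkdir -p "$mock/0000:af:00.0/net/cvl_0_0"

net_devs=()
for pci in 0000:af:00.0; do
    pci_net_devs=("$mock/$pci/net/"*)          # glob: one entry per netdev
    pci_net_devs=("${pci_net_devs[@]##*/}")    # keep only the basename
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
```

The real script globs `/sys/bus/pci/devices/$pci/net/` directly; the `##*/` expansion is what turns the full sysfs path into the bare interface name seen in the log.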
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:37.071 Found net devices under 0000:af:00.1: cvl_0_1 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:37.071 02:31:06 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:37.071 02:31:06 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:37.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:37.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:09:37.071 00:09:37.071 --- 10.0.0.2 ping statistics --- 00:09:37.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.071 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:37.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:37.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:09:37.071 00:09:37.071 --- 10.0.0.1 ping statistics --- 00:09:37.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.071 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=842833 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 842833 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 842833 ']' 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:37.071 [2024-12-16 02:31:07.128655] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:09:37.071 [2024-12-16 02:31:07.128707] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:37.071 [2024-12-16 02:31:07.207259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.071 [2024-12-16 02:31:07.227600] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:37.071 [2024-12-16 02:31:07.227631] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
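The nvmf_tcp_init sequence replayed earlier in this run (nvmf/common.sh@267-291) moves the target-side port into a private network namespace, addresses both sides, opens TCP port 4420, and pings both ways. A dry-run sketch of that topology setup, with `run` only recording commands so it executes without root or the real NICs:

```shell
#!/usr/bin/env bash
# Dry-run of the nvmf_tcp_init steps shown in this log. Interface names and
# addresses match this run (cvl_0_0/cvl_0_1, 10.0.0.2/10.0.0.1).
set -euo pipefail

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0;  TGT_IP=10.0.0.2   # ends up inside the namespace
INIT_IF=cvl_0_1; INIT_IP=10.0.0.1  # stays in the root namespace

cmds=()
run() { cmds+=("$*"); echo "+ $*"; }   # record instead of executing

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INIT_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add "$INIT_IP/24" dev "$INIT_IF"
run ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
run ip link set "$INIT_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TGT_IP"
run ip netns exec "$NS" ping -c 1 "$INIT_IP"
```

Replacing `run` with direct execution (as root) reproduces the setup; the namespace is why every target-side command in the log, including `nvmf_tgt` itself, is wrapped in `ip netns exec cvl_0_0_ns_spdk`.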
00:09:37.071 [2024-12-16 02:31:07.227639] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:37.071 [2024-12-16 02:31:07.227645] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:37.071 [2024-12-16 02:31:07.227649] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:37.071 [2024-12-16 02:31:07.228118] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:37.071 [2024-12-16 02:31:07.369966] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:37.071 [2024-12-16 02:31:07.390164] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.071 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:37.072 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.072 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:37.072 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.072 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:37.072 malloc0 00:09:37.072 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:09:37.072 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:37.072 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.072 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:37.072 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.072 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:37.072 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:37.072 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:37.072 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:37.072 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:37.072 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:37.072 { 00:09:37.072 "params": { 00:09:37.072 "name": "Nvme$subsystem", 00:09:37.072 "trtype": "$TEST_TRANSPORT", 00:09:37.072 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:37.072 "adrfam": "ipv4", 00:09:37.072 "trsvcid": "$NVMF_PORT", 00:09:37.072 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:37.072 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:37.072 "hdgst": ${hdgst:-false}, 00:09:37.072 "ddgst": ${ddgst:-false} 00:09:37.072 }, 00:09:37.072 "method": "bdev_nvme_attach_controller" 00:09:37.072 } 00:09:37.072 EOF 00:09:37.072 )") 00:09:37.072 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:37.072 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
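The provisioning RPCs issued by zcopy.sh@22-30 above build the whole target in five calls: a zcopy-enabled TCP transport, one subsystem, data and discovery listeners, and a 32 MB malloc bdev attached as namespace 1. A dry-run sketch of that sequence, where `rpc_cmd` just records each call (in the real run it is SPDK's RPC client talking to /var/tmp/spdk.sock):

```shell
#!/usr/bin/env bash
# Dry-run of the target provisioning RPC sequence from zcopy.sh@22-30.
set -euo pipefail

calls=()
rpc_cmd() { calls+=("$*"); echo "rpc: $*"; }

rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_malloc_create 32 4096 -b malloc0
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
```

The last call pins NSID 1 explicitly (`-n 1`), which is why the repeated add_ns attempts later in this run fail with "Requested NSID 1 already in use".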
00:09:37.072 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:37.072 02:31:07 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:37.072 "params": { 00:09:37.072 "name": "Nvme1", 00:09:37.072 "trtype": "tcp", 00:09:37.072 "traddr": "10.0.0.2", 00:09:37.072 "adrfam": "ipv4", 00:09:37.072 "trsvcid": "4420", 00:09:37.072 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:37.072 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:37.072 "hdgst": false, 00:09:37.072 "ddgst": false 00:09:37.072 }, 00:09:37.072 "method": "bdev_nvme_attach_controller" 00:09:37.072 }' 00:09:37.072 [2024-12-16 02:31:07.471903] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:09:37.072 [2024-12-16 02:31:07.471943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid842854 ] 00:09:37.072 [2024-12-16 02:31:07.543889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.072 [2024-12-16 02:31:07.566437] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.331 Running I/O for 10 seconds... 
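The gen_nvmf_target_json expansion traced above fills a per-subsystem heredoc template with this run's connection parameters and feeds the result to bdevperf via `--json /dev/fd/62`. A minimal sketch of that expansion for one subsystem, with values hard-coded to match this run:

```shell
#!/usr/bin/env bash
# Sketch of one gen_nvmf_target_json template expansion (nvmf/common.sh@582).
set -euo pipefail

subsystem=1
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

The real helper accumulates one such fragment per subsystem into an array and pipes the set through `jq` (the `IFS=,` / `printf '%s\n'` step in the trace) to produce the single document bdevperf parses.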
00:09:39.204 8676.00 IOPS, 67.78 MiB/s [2024-12-16T01:31:11.242Z] 8767.50 IOPS, 68.50 MiB/s [2024-12-16T01:31:12.187Z] 8735.00 IOPS, 68.24 MiB/s [2024-12-16T01:31:13.127Z] 8774.00 IOPS, 68.55 MiB/s [2024-12-16T01:31:14.065Z] 8801.60 IOPS, 68.76 MiB/s [2024-12-16T01:31:15.004Z] 8815.00 IOPS, 68.87 MiB/s [2024-12-16T01:31:15.942Z] 8829.00 IOPS, 68.98 MiB/s [2024-12-16T01:31:16.881Z] 8842.00 IOPS, 69.08 MiB/s [2024-12-16T01:31:18.262Z] 8848.78 IOPS, 69.13 MiB/s [2024-12-16T01:31:18.262Z] 8853.00 IOPS, 69.16 MiB/s 00:09:47.603 Latency(us) 00:09:47.603 [2024-12-16T01:31:18.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:47.603 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:47.603 Verification LBA range: start 0x0 length 0x1000 00:09:47.603 Nvme1n1 : 10.01 8855.23 69.18 0.00 0.00 14413.37 2278.16 25090.93 00:09:47.603 [2024-12-16T01:31:18.262Z] =================================================================================================================== 00:09:47.603 [2024-12-16T01:31:18.262Z] Total : 8855.23 69.18 0.00 0.00 14413.37 2278.16 25090.93 00:09:47.603 02:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=844642 00:09:47.603 02:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:47.603 02:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.603 02:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:47.603 02:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:47.603 02:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:47.603 02:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:47.603 02:31:18 
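The MiB/s column in the bdevperf summary above follows directly from the IOPS column: with 8192-byte IOs (`-o 8192`), throughput in MiB/s is IOPS × 8192 / 2^20, i.e. IOPS / 128. A quick check against the reported average:

```shell
#!/usr/bin/env bash
# Sanity-check the summary row: 8855.23 IOPS at 8 KiB per IO -> 69.18 MiB/s.
set -euo pipefail

iops=8855.23
mibs=$(awk -v i="$iops" 'BEGIN { printf "%.2f", i * 8192 / 1048576 }')
echo "$iops IOPS @ 8 KiB = $mibs MiB/s"
```

The same factor of 128 reconciles every per-second sample in the run (e.g. 8676.00 IOPS → 67.78 MiB/s).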
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:47.603 02:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:47.603 { 00:09:47.603 "params": { 00:09:47.603 "name": "Nvme$subsystem", 00:09:47.603 "trtype": "$TEST_TRANSPORT", 00:09:47.603 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:47.603 "adrfam": "ipv4", 00:09:47.603 "trsvcid": "$NVMF_PORT", 00:09:47.603 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:47.603 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:47.603 "hdgst": ${hdgst:-false}, 00:09:47.603 "ddgst": ${ddgst:-false} 00:09:47.603 }, 00:09:47.603 "method": "bdev_nvme_attach_controller" 00:09:47.603 } 00:09:47.603 EOF 00:09:47.603 )") 00:09:47.603 02:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:47.603 [2024-12-16 02:31:18.036355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.603 [2024-12-16 02:31:18.036392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.603 02:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:47.603 02:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:47.603 02:31:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:47.603 "params": { 00:09:47.603 "name": "Nvme1", 00:09:47.603 "trtype": "tcp", 00:09:47.603 "traddr": "10.0.0.2", 00:09:47.603 "adrfam": "ipv4", 00:09:47.603 "trsvcid": "4420", 00:09:47.603 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:47.603 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:47.603 "hdgst": false, 00:09:47.603 "ddgst": false 00:09:47.603 }, 00:09:47.603 "method": "bdev_nvme_attach_controller" 00:09:47.603 }' 00:09:47.603 [2024-12-16 02:31:18.048355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.603 [2024-12-16 02:31:18.048368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.603 [2024-12-16 02:31:18.060383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.603 [2024-12-16 02:31:18.060394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.604 [2024-12-16 02:31:18.072412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.604 [2024-12-16 02:31:18.072422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.604 [2024-12-16 02:31:18.077225] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:09:47.604 [2024-12-16 02:31:18.077268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid844642 ] 00:09:47.604 [2024-12-16 02:31:18.084447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.604 [2024-12-16 02:31:18.084459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.604 [2024-12-16 02:31:18.096476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.604 [2024-12-16 02:31:18.096487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.604 [2024-12-16 02:31:18.108510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.604 [2024-12-16 02:31:18.108520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.604 [2024-12-16 02:31:18.120543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.604 [2024-12-16 02:31:18.120554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.604 [2024-12-16 02:31:18.132574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.604 [2024-12-16 02:31:18.132583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.604 [2024-12-16 02:31:18.144607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.604 [2024-12-16 02:31:18.144622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.604 [2024-12-16 02:31:18.151272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.604 [2024-12-16 02:31:18.156646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:47.604 [2024-12-16 02:31:18.156661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.604 [2024-12-16 02:31:18.168674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.604 [2024-12-16 02:31:18.168688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.604 [2024-12-16 02:31:18.173608] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.604 [2024-12-16 02:31:18.180702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.604 [2024-12-16 02:31:18.180714] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.604 [2024-12-16 02:31:18.192746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.604 [2024-12-16 02:31:18.192769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.604 [2024-12-16 02:31:18.204773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.604 [2024-12-16 02:31:18.204790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.604 [2024-12-16 02:31:18.216800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.604 [2024-12-16 02:31:18.216815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.604 [2024-12-16 02:31:18.228832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.604 [2024-12-16 02:31:18.228850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.604 [2024-12-16 02:31:18.240868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.604 [2024-12-16 02:31:18.240883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.604 [2024-12-16 02:31:18.252893] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.604 [2024-12-16 02:31:18.252903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.864 [2024-12-16 02:31:18.264940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.864 [2024-12-16 02:31:18.264961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.864 [2024-12-16 02:31:18.276960] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.864 [2024-12-16 02:31:18.276974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.864 [2024-12-16 02:31:18.288998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.864 [2024-12-16 02:31:18.289013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.864 [2024-12-16 02:31:18.301028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.864 [2024-12-16 02:31:18.301042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.864 [2024-12-16 02:31:18.313056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.864 [2024-12-16 02:31:18.313067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.864 [2024-12-16 02:31:18.325086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.864 [2024-12-16 02:31:18.325096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.864 [2024-12-16 02:31:18.337118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.864 [2024-12-16 02:31:18.337130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.864 [2024-12-16 02:31:18.349154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:47.864 [2024-12-16 02:31:18.349167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.864 Running I/O for 5 seconds... 00:09:47.864 17124.00 IOPS, 133.78 MiB/s [2024-12-16T01:31:19.564Z] [2024-12-16 02:31:20.439469] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.442 [2024-12-16 02:31:20.439489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:49.963 [2024-12-16 02:31:20.453394] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.963 [2024-12-16 02:31:20.453414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.963 17142.00 IOPS, 133.92 MiB/s [2024-12-16T01:31:20.622Z] [2024-12-16 02:31:20.467058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.963 [2024-12-16 02:31:20.467077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.963 [2024-12-16 02:31:20.481317] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.963 [2024-12-16 02:31:20.481337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.963 [2024-12-16 02:31:20.492314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.963 [2024-12-16 02:31:20.492335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.963 [2024-12-16 02:31:20.502168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.963 [2024-12-16 02:31:20.502188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.963 [2024-12-16 02:31:20.516329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.963 [2024-12-16 02:31:20.516350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.963 [2024-12-16 02:31:20.530376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.963 [2024-12-16 02:31:20.530396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.963 [2024-12-16 02:31:20.544458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.963 [2024-12-16 02:31:20.544477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:49.963 [2024-12-16 02:31:20.558265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.963 [2024-12-16 02:31:20.558294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.963 [2024-12-16 02:31:20.572063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.963 [2024-12-16 02:31:20.572083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.963 [2024-12-16 02:31:20.585940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.963 [2024-12-16 02:31:20.585960] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.963 [2024-12-16 02:31:20.599658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.963 [2024-12-16 02:31:20.599677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:49.963 [2024-12-16 02:31:20.613064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:49.963 [2024-12-16 02:31:20.613084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.223 [2024-12-16 02:31:20.626945] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.223 [2024-12-16 02:31:20.626964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.223 [2024-12-16 02:31:20.640275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.223 [2024-12-16 02:31:20.640294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.223 [2024-12-16 02:31:20.653984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.223 [2024-12-16 02:31:20.654003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.223 [2024-12-16 02:31:20.667604] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.223 [2024-12-16 02:31:20.667622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.223 [2024-12-16 02:31:20.681625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.223 [2024-12-16 02:31:20.681644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.223 [2024-12-16 02:31:20.694969] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.223 [2024-12-16 02:31:20.694988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.223 [2024-12-16 02:31:20.703955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.223 [2024-12-16 02:31:20.703973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.223 [2024-12-16 02:31:20.717822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.223 [2024-12-16 02:31:20.717841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.223 [2024-12-16 02:31:20.731384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.223 [2024-12-16 02:31:20.731404] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.223 [2024-12-16 02:31:20.745324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.223 [2024-12-16 02:31:20.745343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.223 [2024-12-16 02:31:20.759180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.223 [2024-12-16 02:31:20.759199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.223 [2024-12-16 02:31:20.768005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:50.223 [2024-12-16 02:31:20.768024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.223 [2024-12-16 02:31:20.782407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.223 [2024-12-16 02:31:20.782425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.223 [2024-12-16 02:31:20.795973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.223 [2024-12-16 02:31:20.795992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.223 [2024-12-16 02:31:20.809745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.223 [2024-12-16 02:31:20.809763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.223 [2024-12-16 02:31:20.823375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.223 [2024-12-16 02:31:20.823395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.223 [2024-12-16 02:31:20.837014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.223 [2024-12-16 02:31:20.837034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.223 [2024-12-16 02:31:20.850387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.223 [2024-12-16 02:31:20.850408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.223 [2024-12-16 02:31:20.864274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.223 [2024-12-16 02:31:20.864293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.223 [2024-12-16 02:31:20.877719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.223 
[2024-12-16 02:31:20.877738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.483 [2024-12-16 02:31:20.886598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.483 [2024-12-16 02:31:20.886617] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.483 [2024-12-16 02:31:20.900689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.483 [2024-12-16 02:31:20.900707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.483 [2024-12-16 02:31:20.913975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.483 [2024-12-16 02:31:20.913994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.483 [2024-12-16 02:31:20.923530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.483 [2024-12-16 02:31:20.923549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.483 [2024-12-16 02:31:20.937656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.483 [2024-12-16 02:31:20.937675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.483 [2024-12-16 02:31:20.946584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.483 [2024-12-16 02:31:20.946604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.483 [2024-12-16 02:31:20.960553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.483 [2024-12-16 02:31:20.960572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.483 [2024-12-16 02:31:20.974501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.483 [2024-12-16 02:31:20.974519] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.483 [2024-12-16 02:31:20.988827] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.483 [2024-12-16 02:31:20.988851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.483 [2024-12-16 02:31:20.999859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.483 [2024-12-16 02:31:20.999878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.483 [2024-12-16 02:31:21.014012] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.483 [2024-12-16 02:31:21.014032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.483 [2024-12-16 02:31:21.027566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.483 [2024-12-16 02:31:21.027584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.483 [2024-12-16 02:31:21.041065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.483 [2024-12-16 02:31:21.041084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.483 [2024-12-16 02:31:21.050335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.483 [2024-12-16 02:31:21.050354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.483 [2024-12-16 02:31:21.060022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.483 [2024-12-16 02:31:21.060040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.483 [2024-12-16 02:31:21.074253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.483 [2024-12-16 02:31:21.074274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:50.483 [2024-12-16 02:31:21.087333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.483 [2024-12-16 02:31:21.087352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.483 [2024-12-16 02:31:21.101301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.483 [2024-12-16 02:31:21.101321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.483 [2024-12-16 02:31:21.114669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.483 [2024-12-16 02:31:21.114688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.483 [2024-12-16 02:31:21.124250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.483 [2024-12-16 02:31:21.124268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.483 [2024-12-16 02:31:21.138190] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.483 [2024-12-16 02:31:21.138210] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.743 [2024-12-16 02:31:21.152038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.743 [2024-12-16 02:31:21.152057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.743 [2024-12-16 02:31:21.165682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.743 [2024-12-16 02:31:21.165701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.743 [2024-12-16 02:31:21.179631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.743 [2024-12-16 02:31:21.179650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.743 [2024-12-16 02:31:21.193677] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.743 [2024-12-16 02:31:21.193696] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.743 [2024-12-16 02:31:21.207330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.743 [2024-12-16 02:31:21.207349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.743 [2024-12-16 02:31:21.216425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.743 [2024-12-16 02:31:21.216444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.743 [2024-12-16 02:31:21.231299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.744 [2024-12-16 02:31:21.231318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.744 [2024-12-16 02:31:21.246412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.744 [2024-12-16 02:31:21.246432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.744 [2024-12-16 02:31:21.259552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.744 [2024-12-16 02:31:21.259572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.744 [2024-12-16 02:31:21.273201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.744 [2024-12-16 02:31:21.273223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.744 [2024-12-16 02:31:21.286682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.744 [2024-12-16 02:31:21.286701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.744 [2024-12-16 02:31:21.300175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:50.744 [2024-12-16 02:31:21.300194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.744 [2024-12-16 02:31:21.313492] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.744 [2024-12-16 02:31:21.313511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.744 [2024-12-16 02:31:21.326973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.744 [2024-12-16 02:31:21.326992] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.744 [2024-12-16 02:31:21.340454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.744 [2024-12-16 02:31:21.340474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.744 [2024-12-16 02:31:21.354070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.744 [2024-12-16 02:31:21.354090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.744 [2024-12-16 02:31:21.367593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.744 [2024-12-16 02:31:21.367612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.744 [2024-12-16 02:31:21.381300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.744 [2024-12-16 02:31:21.381319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:50.744 [2024-12-16 02:31:21.395559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:50.744 [2024-12-16 02:31:21.395577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.004 [2024-12-16 02:31:21.411075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.004 
[2024-12-16 02:31:21.411094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.004 [2024-12-16 02:31:21.425017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.004 [2024-12-16 02:31:21.425038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.004 [2024-12-16 02:31:21.438910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.004 [2024-12-16 02:31:21.438930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.004 [2024-12-16 02:31:21.452502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.004 [2024-12-16 02:31:21.452523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.004 17145.00 IOPS, 133.95 MiB/s [2024-12-16T01:31:21.663Z] [2024-12-16 02:31:21.466161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.004 [2024-12-16 02:31:21.466181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.004 [2024-12-16 02:31:21.479895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.004 [2024-12-16 02:31:21.479915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.004 [2024-12-16 02:31:21.493773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.004 [2024-12-16 02:31:21.493793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.004 [2024-12-16 02:31:21.507330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.004 [2024-12-16 02:31:21.507349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.004 [2024-12-16 02:31:21.520835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.004 
[2024-12-16 02:31:21.520860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.004 [2024-12-16 02:31:21.534427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.004 [2024-12-16 02:31:21.534451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.004 [2024-12-16 02:31:21.547988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.004 [2024-12-16 02:31:21.548008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.004 [2024-12-16 02:31:21.561551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.004 [2024-12-16 02:31:21.561571] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.004 [2024-12-16 02:31:21.575590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.004 [2024-12-16 02:31:21.575611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.004 [2024-12-16 02:31:21.589484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.004 [2024-12-16 02:31:21.589503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.004 [2024-12-16 02:31:21.603230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.004 [2024-12-16 02:31:21.603249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.004 [2024-12-16 02:31:21.616801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.004 [2024-12-16 02:31:21.616821] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.004 [2024-12-16 02:31:21.630512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.004 [2024-12-16 02:31:21.630532] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.004 [2024-12-16 02:31:21.644008] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.004 [2024-12-16 02:31:21.644029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.004 [2024-12-16 02:31:21.657640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.004 [2024-12-16 02:31:21.657659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.264 [2024-12-16 02:31:21.666968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.265 [2024-12-16 02:31:21.666988] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.265 [2024-12-16 02:31:21.675752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.265 [2024-12-16 02:31:21.675771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.265 [2024-12-16 02:31:21.690191] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.265 [2024-12-16 02:31:21.690211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.265 [2024-12-16 02:31:21.703524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.265 [2024-12-16 02:31:21.703544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.265 [2024-12-16 02:31:21.717036] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.265 [2024-12-16 02:31:21.717056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.265 [2024-12-16 02:31:21.725764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.265 [2024-12-16 02:31:21.725785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:51.265 [2024-12-16 02:31:21.739770] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.265 [2024-12-16 02:31:21.739792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.265 [2024-12-16 02:31:21.753491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.265 [2024-12-16 02:31:21.753512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.265 [2024-12-16 02:31:21.767668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.265 [2024-12-16 02:31:21.767690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.265 [2024-12-16 02:31:21.781281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.265 [2024-12-16 02:31:21.781306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.265 [2024-12-16 02:31:21.790082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.265 [2024-12-16 02:31:21.790101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.265 [2024-12-16 02:31:21.803857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.265 [2024-12-16 02:31:21.803878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.265 [2024-12-16 02:31:21.817048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.265 [2024-12-16 02:31:21.817069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.265 [2024-12-16 02:31:21.830965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.265 [2024-12-16 02:31:21.830985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.265 [2024-12-16 02:31:21.844517] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.265 [2024-12-16 02:31:21.844538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.265 [2024-12-16 02:31:21.858295] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.265 [2024-12-16 02:31:21.858316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.265 [2024-12-16 02:31:21.867232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.265 [2024-12-16 02:31:21.867251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.265 [2024-12-16 02:31:21.881529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.265 [2024-12-16 02:31:21.881548] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.265 [2024-12-16 02:31:21.894974] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.265 [2024-12-16 02:31:21.894994] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.265 [2024-12-16 02:31:21.908833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.265 [2024-12-16 02:31:21.908857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.265 [2024-12-16 02:31:21.922500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.265 [2024-12-16 02:31:21.922519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.525 [2024-12-16 02:31:21.931340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:51.525 [2024-12-16 02:31:21.931359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.525 [2024-12-16 02:31:21.945507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:51.525 [2024-12-16 02:31:21.945527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:51.525 [previous two messages repeated for each subsequent add_ns attempt; timestamps 02:31:21.959 through 02:31:23.423 omitted] 00:09:52.045 17156.50 IOPS, 134.04 MiB/s [2024-12-16T01:31:22.704Z] [2024-12-16 02:31:23.436771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.827 [2024-12-16 02:31:23.436791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:09:52.827 [2024-12-16 02:31:23.450644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.827 [2024-12-16 02:31:23.450663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.827
17158.80 IOPS, 134.05 MiB/s [2024-12-16T01:31:23.486Z]
[2024-12-16 02:31:23.463901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.827 [2024-12-16 02:31:23.463920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:52.827
00:09:52.827 Latency(us)
[2024-12-16T01:31:23.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:52.827 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:52.827 Nvme1n1 : 5.01 17161.25 134.07 0.00 0.00 7451.61 3339.22 17725.93
[2024-12-16T01:31:23.486Z] ===================================================================================================================
[2024-12-16T01:31:23.486Z] Total : 17161.25 134.07 0.00 0.00 7451.61 3339.22 17725.93
00:09:52.827 [2024-12-16 02:31:23.473559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:52.827 [2024-12-16 02:31:23.473577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.088 [2024-12-16 02:31:23.485590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.088 [2024-12-16 02:31:23.485606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.088 [2024-12-16 02:31:23.497633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.088 [2024-12-16 02:31:23.497653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.088 [2024-12-16 02:31:23.509660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in
use 00:09:53.088 [2024-12-16 02:31:23.509681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.088 [previous two messages repeated at ~12 ms intervals; timestamps 02:31:23.521 through 02:31:23.593 omitted] [2024-12-16 02:31:23.605910] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.088 [2024-12-16 02:31:23.605925]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.088 [2024-12-16 02:31:23.617941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.088 [2024-12-16 02:31:23.617951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.088 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (844642) - No such process 00:09:53.088 02:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 844642 00:09:53.088 02:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.088 02:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.088 02:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.088 02:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.088 02:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:53.088 02:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.088 02:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.088 delay0 00:09:53.088 02:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.088 02:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:53.088 02:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.088 02:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.088 02:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:09:53.088 02:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:53.348 [2024-12-16 02:31:23.769432] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:59.934 Initializing NVMe Controllers 00:09:59.935 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:59.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:59.935 Initialization complete. Launching workers. 00:09:59.935 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 81 00:09:59.935 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 368, failed to submit 33 00:09:59.935 success 173, unsuccessful 195, failed 0 00:09:59.935 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:59.935 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:59.935 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:59.935 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:59.935 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:59.935 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:59.935 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:59.935 02:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:59.935 rmmod nvme_tcp 00:09:59.935 rmmod nvme_fabrics 00:09:59.935 rmmod nvme_keyring 00:09:59.935 02:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:59.935 02:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:59.935 02:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:59.935 02:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 842833 ']' 00:09:59.935 02:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 842833 00:09:59.935 02:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 842833 ']' 00:09:59.935 02:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 842833 00:09:59.935 02:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:59.935 02:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:59.935 02:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 842833 00:09:59.935 02:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:59.935 02:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:59.935 02:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 842833' 00:09:59.935 killing process with pid 842833 00:09:59.935 02:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 842833 00:09:59.935 02:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 842833 00:09:59.935 02:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:59.935 02:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:59.935 02:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:59.935 02:31:30 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:59.935 02:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:59.935 02:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:59.935 02:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:59.935 02:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:59.935 02:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:59.935 02:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:59.935 02:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:59.935 02:31:30 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.847 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:01.847 00:10:01.847 real 0m31.508s 00:10:01.847 user 0m42.164s 00:10:01.847 sys 0m11.037s 00:10:01.847 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.847 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.847 ************************************ 00:10:01.847 END TEST nvmf_zcopy 00:10:01.847 ************************************ 00:10:01.847 02:31:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:01.847 02:31:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:01.847 02:31:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.847 02:31:32 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:10:01.847 ************************************ 00:10:01.847 START TEST nvmf_nmic 00:10:01.847 ************************************ 00:10:01.847 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:02.108 * Looking for test storage... 00:10:02.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:02.108 02:31:32 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:02.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.108 --rc genhtml_branch_coverage=1 00:10:02.108 --rc genhtml_function_coverage=1 00:10:02.108 --rc genhtml_legend=1 00:10:02.108 --rc geninfo_all_blocks=1 00:10:02.108 --rc geninfo_unexecuted_blocks=1 00:10:02.108 00:10:02.108 ' 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:02.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.108 --rc genhtml_branch_coverage=1 00:10:02.108 --rc genhtml_function_coverage=1 00:10:02.108 --rc genhtml_legend=1 00:10:02.108 --rc geninfo_all_blocks=1 00:10:02.108 --rc geninfo_unexecuted_blocks=1 00:10:02.108 00:10:02.108 ' 00:10:02.108 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:02.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.108 --rc genhtml_branch_coverage=1 00:10:02.109 --rc genhtml_function_coverage=1 00:10:02.109 --rc genhtml_legend=1 00:10:02.109 --rc geninfo_all_blocks=1 00:10:02.109 --rc geninfo_unexecuted_blocks=1 00:10:02.109 00:10:02.109 ' 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:02.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.109 --rc genhtml_branch_coverage=1 00:10:02.109 --rc genhtml_function_coverage=1 00:10:02.109 --rc genhtml_legend=1 00:10:02.109 --rc geninfo_all_blocks=1 00:10:02.109 --rc geninfo_unexecuted_blocks=1 00:10:02.109 00:10:02.109 ' 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:02.109 02:31:32 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:02.109 
02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:02.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:02.109 
02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:02.109 02:31:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.689 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:08.689 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:08.689 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:08.689 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:08.689 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:08.689 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:08.689 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:08.689 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:08.689 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:08.689 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:08.689 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:08.689 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:08.689 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:08.689 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:08.689 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:08.689 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:08.689 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:08.689 02:31:38 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:08.689 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:08.689 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:08.689 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:08.689 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:08.689 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:08.689 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:08.689 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:08.689 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:08.690 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:08.690 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:08.690 Found net devices under 0000:af:00.0: cvl_0_0 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:08.690 Found net devices under 0000:af:00.1: cvl_0_1 00:10:08.690 
02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:08.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:08.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.483 ms 00:10:08.690 00:10:08.690 --- 10.0.0.2 ping statistics --- 00:10:08.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.690 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:08.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:08.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:10:08.690 00:10:08.690 --- 10.0.0.1 ping statistics --- 00:10:08.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:08.690 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=850124 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 850124 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 850124 ']' 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:08.690 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:08.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.691 [2024-12-16 02:31:38.647083] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:10:08.691 [2024-12-16 02:31:38.647133] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:08.691 [2024-12-16 02:31:38.725582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:08.691 [2024-12-16 02:31:38.750685] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:08.691 [2024-12-16 02:31:38.750721] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:08.691 [2024-12-16 02:31:38.750729] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:08.691 [2024-12-16 02:31:38.750735] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:08.691 [2024-12-16 02:31:38.750740] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:08.691 [2024-12-16 02:31:38.752036] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:08.691 [2024-12-16 02:31:38.752137] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:08.691 [2024-12-16 02:31:38.752246] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.691 [2024-12-16 02:31:38.752246] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.691 [2024-12-16 02:31:38.896998] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:08.691 
02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.691 Malloc0 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.691 [2024-12-16 02:31:38.961555] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:08.691 test case1: single bdev can't be used in multiple subsystems 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.691 [2024-12-16 02:31:38.989475] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:08.691 [2024-12-16 
02:31:38.989494] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:08.691 [2024-12-16 02:31:38.989501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:08.691 request: 00:10:08.691 { 00:10:08.691 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:08.691 "namespace": { 00:10:08.691 "bdev_name": "Malloc0", 00:10:08.691 "no_auto_visible": false, 00:10:08.691 "hide_metadata": false 00:10:08.691 }, 00:10:08.691 "method": "nvmf_subsystem_add_ns", 00:10:08.691 "req_id": 1 00:10:08.691 } 00:10:08.691 Got JSON-RPC error response 00:10:08.691 response: 00:10:08.691 { 00:10:08.691 "code": -32602, 00:10:08.691 "message": "Invalid parameters" 00:10:08.691 } 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:08.691 Adding namespace failed - expected result. 
00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:08.691 test case2: host connect to nvmf target in multiple paths 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.691 02:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:08.691 [2024-12-16 02:31:39.001609] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:08.691 02:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.691 02:31:39 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:09.630 02:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:11.012 02:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:11.012 02:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:11.012 02:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:11.012 02:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:11.012 02:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:10:12.921 02:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:12.921 02:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:12.921 02:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:12.921 02:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:12.921 02:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:12.921 02:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:12.921 02:31:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:12.921 [global] 00:10:12.921 thread=1 00:10:12.921 invalidate=1 00:10:12.921 rw=write 00:10:12.921 time_based=1 00:10:12.921 runtime=1 00:10:12.921 ioengine=libaio 00:10:12.921 direct=1 00:10:12.921 bs=4096 00:10:12.921 iodepth=1 00:10:12.921 norandommap=0 00:10:12.921 numjobs=1 00:10:12.921 00:10:12.921 verify_dump=1 00:10:12.921 verify_backlog=512 00:10:12.921 verify_state_save=0 00:10:12.921 do_verify=1 00:10:12.921 verify=crc32c-intel 00:10:12.921 [job0] 00:10:12.921 filename=/dev/nvme0n1 00:10:12.921 Could not set queue depth (nvme0n1) 00:10:13.181 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:13.181 fio-3.35 00:10:13.181 Starting 1 thread 00:10:14.564 00:10:14.564 job0: (groupid=0, jobs=1): err= 0: pid=851127: Mon Dec 16 02:31:44 2024 00:10:14.564 read: IOPS=2304, BW=9219KiB/s (9440kB/s)(9228KiB/1001msec) 00:10:14.564 slat (nsec): min=7906, max=40662, avg=8818.66, stdev=1389.33 00:10:14.564 clat (usec): min=159, max=1068, avg=238.32, stdev=32.96 00:10:14.564 lat (usec): min=167, max=1084, 
avg=247.14, stdev=33.08 00:10:14.564 clat percentiles (usec): 00:10:14.564 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 204], 20.00th=[ 212], 00:10:14.564 | 30.00th=[ 221], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 251], 00:10:14.564 | 70.00th=[ 258], 80.00th=[ 262], 90.00th=[ 269], 95.00th=[ 273], 00:10:14.564 | 99.00th=[ 285], 99.50th=[ 293], 99.90th=[ 306], 99.95th=[ 429], 00:10:14.564 | 99.99th=[ 1074] 00:10:14.564 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:14.564 slat (nsec): min=9327, max=44153, avg=11943.46, stdev=1831.13 00:10:14.564 clat (usec): min=107, max=1212, avg=150.11, stdev=31.78 00:10:14.564 lat (usec): min=119, max=1224, avg=162.05, stdev=32.12 00:10:14.564 clat percentiles (usec): 00:10:14.564 | 1.00th=[ 116], 5.00th=[ 120], 10.00th=[ 122], 20.00th=[ 126], 00:10:14.564 | 30.00th=[ 131], 40.00th=[ 149], 50.00th=[ 155], 60.00th=[ 159], 00:10:14.564 | 70.00th=[ 161], 80.00th=[ 165], 90.00th=[ 172], 95.00th=[ 178], 00:10:14.564 | 99.00th=[ 241], 99.50th=[ 243], 99.90th=[ 314], 99.95th=[ 363], 00:10:14.564 | 99.99th=[ 1221] 00:10:14.564 bw ( KiB/s): min=11120, max=11120, per=100.00%, avg=11120.00, stdev= 0.00, samples=1 00:10:14.564 iops : min= 2780, max= 2780, avg=2780.00, stdev= 0.00, samples=1 00:10:14.564 lat (usec) : 250=78.98%, 500=20.98% 00:10:14.564 lat (msec) : 2=0.04% 00:10:14.564 cpu : usr=4.60%, sys=7.70%, ctx=4867, majf=0, minf=1 00:10:14.564 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:14.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.565 issued rwts: total=2307,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.565 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:14.565 00:10:14.565 Run status group 0 (all jobs): 00:10:14.565 READ: bw=9219KiB/s (9440kB/s), 9219KiB/s-9219KiB/s (9440kB/s-9440kB/s), io=9228KiB (9449kB), 
run=1001-1001msec 00:10:14.565 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:10:14.565 00:10:14.565 Disk stats (read/write): 00:10:14.565 nvme0n1: ios=2098/2335, merge=0/0, ticks=491/318, in_queue=809, util=91.28% 00:10:14.565 02:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:14.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:14.565 02:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:14.565 02:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:14.565 02:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:14.565 02:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:14.565 02:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:14.565 02:31:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:14.565 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:14.565 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:14.565 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:14.565 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:14.565 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:14.565 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:14.565 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:14.565 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- 
# for i in {1..20} 00:10:14.565 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:14.565 rmmod nvme_tcp 00:10:14.565 rmmod nvme_fabrics 00:10:14.565 rmmod nvme_keyring 00:10:14.565 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:14.565 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:14.565 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:14.565 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 850124 ']' 00:10:14.565 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 850124 00:10:14.565 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 850124 ']' 00:10:14.565 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 850124 00:10:14.565 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:14.565 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:14.565 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 850124 00:10:14.565 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:14.565 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:14.565 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 850124' 00:10:14.565 killing process with pid 850124 00:10:14.565 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 850124 00:10:14.565 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 850124 00:10:14.825 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:14.825 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:14.825 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:14.825 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:14.825 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:14.825 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:14.825 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:14.825 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:14.825 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:14.825 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.825 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.825 02:31:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:16.739 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:16.739 00:10:16.739 real 0m14.968s 00:10:16.739 user 0m32.999s 00:10:16.739 sys 0m5.288s 00:10:16.739 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.739 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:16.739 ************************************ 00:10:16.739 END TEST nvmf_nmic 00:10:16.739 ************************************ 00:10:16.998 02:31:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 
00:10:16.998 02:31:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:16.998 02:31:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.998 02:31:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:16.998 ************************************ 00:10:16.998 START TEST nvmf_fio_target 00:10:16.998 ************************************ 00:10:16.998 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:16.998 * Looking for test storage... 00:10:16.998 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:16.998 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:16.998 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:16.998 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:16.998 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:16.998 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:16.998 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:16.998 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:16.998 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:16.998 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:16.998 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:16.998 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read 
-ra ver2 00:10:16.998 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:16.998 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:16.998 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:16.998 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:16.998 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:16.998 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:16.998 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:16.998 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:16.998 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:16.998 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:16.998 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:16.998 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:16.998 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:16.999 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:16.999 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:16.999 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:16.999 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:16.999 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:16.999 02:31:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:16.999 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:16.999 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:16.999 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:16.999 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:16.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.999 --rc genhtml_branch_coverage=1 00:10:16.999 --rc genhtml_function_coverage=1 00:10:16.999 --rc genhtml_legend=1 00:10:16.999 --rc geninfo_all_blocks=1 00:10:16.999 --rc geninfo_unexecuted_blocks=1 00:10:16.999 00:10:16.999 ' 00:10:16.999 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:16.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.999 --rc genhtml_branch_coverage=1 00:10:16.999 --rc genhtml_function_coverage=1 00:10:16.999 --rc genhtml_legend=1 00:10:16.999 --rc geninfo_all_blocks=1 00:10:16.999 --rc geninfo_unexecuted_blocks=1 00:10:16.999 00:10:16.999 ' 00:10:16.999 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:16.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.999 --rc genhtml_branch_coverage=1 00:10:16.999 --rc genhtml_function_coverage=1 00:10:16.999 --rc genhtml_legend=1 00:10:16.999 --rc geninfo_all_blocks=1 00:10:16.999 --rc geninfo_unexecuted_blocks=1 00:10:16.999 00:10:16.999 ' 00:10:16.999 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:16.999 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.999 --rc 
genhtml_branch_coverage=1 00:10:16.999 --rc genhtml_function_coverage=1 00:10:16.999 --rc genhtml_legend=1 00:10:16.999 --rc geninfo_all_blocks=1 00:10:16.999 --rc geninfo_unexecuted_blocks=1 00:10:16.999 00:10:16.999 ' 00:10:16.999 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:16.999 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:17.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:17.259 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:17.260 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:17.260 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:17.260 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:17.260 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:17.260 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:17.260 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:17.260 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.260 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:17.260 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:17.260 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:17.260 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:17.260 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:17.260 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:17.260 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:17.260 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:17.260 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:17.260 02:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:23.837 02:31:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:23.837 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:23.837 02:31:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:23.837 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:23.837 Found net devices under 0000:af:00.0: cvl_0_0 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:23.837 Found net devices under 0000:af:00.1: cvl_0_1 
00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:23.837 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:23.838 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:23.838 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:23.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:23.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:10:23.838 00:10:23.838 --- 10.0.0.2 ping statistics --- 00:10:23.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.838 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:10:23.838 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:23.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:23.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:10:23.838 00:10:23.838 --- 10.0.0.1 ping statistics --- 00:10:23.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.838 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:10:23.838 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:23.838 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:23.838 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:23.838 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:23.838 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:23.838 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:23.838 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:23.838 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:23.838 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:23.838 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:23.838 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:10:23.838 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:23.838 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.838 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=854877 00:10:23.838 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 854877 00:10:23.838 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:23.838 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 854877 ']' 00:10:23.838 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.838 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.838 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.838 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.838 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.838 [2024-12-16 02:31:53.775091] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:10:23.838 [2024-12-16 02:31:53.775134] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.838 [2024-12-16 02:31:53.851751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:23.838 [2024-12-16 02:31:53.874315] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:23.838 [2024-12-16 02:31:53.874350] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:23.838 [2024-12-16 02:31:53.874357] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:23.838 [2024-12-16 02:31:53.874363] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:23.838 [2024-12-16 02:31:53.874367] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:23.838 [2024-12-16 02:31:53.875779] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.838 [2024-12-16 02:31:53.875889] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:23.838 [2024-12-16 02:31:53.875935] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.838 [2024-12-16 02:31:53.875936] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:23.838 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:23.838 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:23.838 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:23.838 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:23.838 02:31:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.838 02:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:23.838 02:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:23.838 [2024-12-16 02:31:54.181017] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:23.838 02:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:23.838 02:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:23.838 02:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:24.097 02:31:54 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:24.097 02:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:24.357 02:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:24.357 02:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:24.618 02:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:24.618 02:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:24.618 02:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:24.879 02:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:24.879 02:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:25.138 02:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:25.138 02:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:25.398 02:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:25.398 02:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:10:25.657 02:31:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:25.657 02:31:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:25.657 02:31:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:25.917 02:31:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:25.917 02:31:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:26.177 02:31:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:26.437 [2024-12-16 02:31:56.878861] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:26.437 02:31:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:26.697 02:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:26.697 02:31:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:10:28.079 02:31:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:28.079 02:31:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:28.079 02:31:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:28.079 02:31:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:28.079 02:31:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:28.079 02:31:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:29.988 02:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:29.988 02:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:29.988 02:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:29.988 02:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:29.988 02:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:29.988 02:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:29.988 02:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:29.988 [global] 00:10:29.988 thread=1 00:10:29.988 invalidate=1 00:10:29.988 rw=write 00:10:29.988 time_based=1 00:10:29.988 runtime=1 00:10:29.988 ioengine=libaio 00:10:29.988 direct=1 00:10:29.988 bs=4096 00:10:29.988 iodepth=1 00:10:29.988 norandommap=0 00:10:29.988 numjobs=1 00:10:29.988 00:10:29.988 
verify_dump=1 00:10:29.988 verify_backlog=512 00:10:29.988 verify_state_save=0 00:10:29.988 do_verify=1 00:10:29.988 verify=crc32c-intel 00:10:29.988 [job0] 00:10:29.988 filename=/dev/nvme0n1 00:10:29.988 [job1] 00:10:29.988 filename=/dev/nvme0n2 00:10:29.988 [job2] 00:10:29.988 filename=/dev/nvme0n3 00:10:29.988 [job3] 00:10:29.988 filename=/dev/nvme0n4 00:10:29.988 Could not set queue depth (nvme0n1) 00:10:29.988 Could not set queue depth (nvme0n2) 00:10:29.988 Could not set queue depth (nvme0n3) 00:10:29.988 Could not set queue depth (nvme0n4) 00:10:30.248 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.248 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.248 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.248 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:30.248 fio-3.35 00:10:30.248 Starting 4 threads 00:10:31.628 00:10:31.628 job0: (groupid=0, jobs=1): err= 0: pid=856201: Mon Dec 16 02:32:02 2024 00:10:31.628 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:31.628 slat (nsec): min=6496, max=26362, avg=7275.86, stdev=797.72 00:10:31.628 clat (usec): min=158, max=404, avg=209.97, stdev=28.90 00:10:31.628 lat (usec): min=166, max=412, avg=217.25, stdev=28.92 00:10:31.628 clat percentiles (usec): 00:10:31.628 | 1.00th=[ 169], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 188], 00:10:31.628 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 208], 00:10:31.628 | 70.00th=[ 217], 80.00th=[ 231], 90.00th=[ 247], 95.00th=[ 277], 00:10:31.628 | 99.00th=[ 302], 99.50th=[ 306], 99.90th=[ 347], 99.95th=[ 392], 00:10:31.628 | 99.99th=[ 404] 00:10:31.628 write: IOPS=2738, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1001msec); 0 zone resets 00:10:31.628 slat (nsec): min=9429, max=42889, avg=10552.20, stdev=1103.22 
00:10:31.628 clat (usec): min=111, max=351, avg=147.06, stdev=21.82 00:10:31.628 lat (usec): min=121, max=393, avg=157.61, stdev=21.96 00:10:31.628 clat percentiles (usec): 00:10:31.628 | 1.00th=[ 118], 5.00th=[ 122], 10.00th=[ 126], 20.00th=[ 130], 00:10:31.628 | 30.00th=[ 135], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 145], 00:10:31.628 | 70.00th=[ 151], 80.00th=[ 163], 90.00th=[ 186], 95.00th=[ 194], 00:10:31.628 | 99.00th=[ 206], 99.50th=[ 210], 99.90th=[ 225], 99.95th=[ 239], 00:10:31.628 | 99.99th=[ 351] 00:10:31.628 bw ( KiB/s): min=12288, max=12288, per=49.98%, avg=12288.00, stdev= 0.00, samples=1 00:10:31.628 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:31.628 lat (usec) : 250=95.70%, 500=4.30% 00:10:31.628 cpu : usr=2.70%, sys=4.90%, ctx=5302, majf=0, minf=1 00:10:31.628 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.628 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.628 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.628 issued rwts: total=2560,2741,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.628 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.628 job1: (groupid=0, jobs=1): err= 0: pid=856202: Mon Dec 16 02:32:02 2024 00:10:31.628 read: IOPS=514, BW=2056KiB/s (2106kB/s)(2116KiB/1029msec) 00:10:31.628 slat (nsec): min=7519, max=25522, avg=9083.51, stdev=2294.13 00:10:31.628 clat (usec): min=202, max=41075, avg=1553.01, stdev=7187.05 00:10:31.629 lat (usec): min=211, max=41089, avg=1562.09, stdev=7188.11 00:10:31.629 clat percentiles (usec): 00:10:31.629 | 1.00th=[ 215], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 235], 00:10:31.629 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 249], 00:10:31.629 | 70.00th=[ 253], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 269], 00:10:31.629 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:31.629 | 99.99th=[41157] 00:10:31.629 write: 
IOPS=995, BW=3981KiB/s (4076kB/s)(4096KiB/1029msec); 0 zone resets 00:10:31.629 slat (nsec): min=10708, max=45290, avg=12730.11, stdev=2428.31 00:10:31.629 clat (usec): min=122, max=295, avg=179.02, stdev=30.15 00:10:31.629 lat (usec): min=133, max=330, avg=191.75, stdev=30.60 00:10:31.629 clat percentiles (usec): 00:10:31.629 | 1.00th=[ 130], 5.00th=[ 137], 10.00th=[ 143], 20.00th=[ 149], 00:10:31.629 | 30.00th=[ 157], 40.00th=[ 163], 50.00th=[ 178], 60.00th=[ 192], 00:10:31.629 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 217], 95.00th=[ 225], 00:10:31.629 | 99.00th=[ 251], 99.50th=[ 273], 99.90th=[ 293], 99.95th=[ 297], 00:10:31.629 | 99.99th=[ 297] 00:10:31.629 bw ( KiB/s): min= 8192, max= 8192, per=33.32%, avg=8192.00, stdev= 0.00, samples=1 00:10:31.629 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:31.629 lat (usec) : 250=87.51%, 500=11.40% 00:10:31.629 lat (msec) : 50=1.09% 00:10:31.629 cpu : usr=0.88%, sys=3.11%, ctx=1554, majf=0, minf=1 00:10:31.629 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.629 issued rwts: total=529,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.629 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.629 job2: (groupid=0, jobs=1): err= 0: pid=856204: Mon Dec 16 02:32:02 2024 00:10:31.629 read: IOPS=21, BW=86.5KiB/s (88.6kB/s)(88.0KiB/1017msec) 00:10:31.629 slat (nsec): min=9898, max=23225, avg=20368.09, stdev=4016.47 00:10:31.629 clat (usec): min=40864, max=41041, avg=40968.77, stdev=54.86 00:10:31.629 lat (usec): min=40887, max=41063, avg=40989.14, stdev=54.72 00:10:31.629 clat percentiles (usec): 00:10:31.629 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:31.629 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:31.629 | 70.00th=[41157], 
80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:31.629 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:31.629 | 99.99th=[41157] 00:10:31.629 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:10:31.629 slat (nsec): min=10836, max=38240, avg=12703.36, stdev=2662.03 00:10:31.629 clat (usec): min=132, max=313, avg=208.08, stdev=23.89 00:10:31.629 lat (usec): min=143, max=331, avg=220.78, stdev=24.22 00:10:31.629 clat percentiles (usec): 00:10:31.629 | 1.00th=[ 149], 5.00th=[ 163], 10.00th=[ 178], 20.00th=[ 194], 00:10:31.629 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 212], 00:10:31.629 | 70.00th=[ 219], 80.00th=[ 225], 90.00th=[ 237], 95.00th=[ 247], 00:10:31.629 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 314], 99.95th=[ 314], 00:10:31.629 | 99.99th=[ 314] 00:10:31.629 bw ( KiB/s): min= 4096, max= 4096, per=16.66%, avg=4096.00, stdev= 0.00, samples=1 00:10:31.629 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:31.629 lat (usec) : 250=92.32%, 500=3.56% 00:10:31.629 lat (msec) : 50=4.12% 00:10:31.629 cpu : usr=0.79%, sys=0.59%, ctx=536, majf=0, minf=1 00:10:31.629 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.629 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.629 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.629 job3: (groupid=0, jobs=1): err= 0: pid=856205: Mon Dec 16 02:32:02 2024 00:10:31.629 read: IOPS=1692, BW=6769KiB/s (6932kB/s)(6776KiB/1001msec) 00:10:31.629 slat (nsec): min=3199, max=31517, avg=5628.86, stdev=1988.60 00:10:31.629 clat (usec): min=203, max=41014, avg=373.66, stdev=2206.51 00:10:31.629 lat (usec): min=207, max=41026, avg=379.29, stdev=2207.17 00:10:31.629 clat percentiles (usec): 00:10:31.629 | 
1.00th=[ 215], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 237], 00:10:31.629 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 255], 00:10:31.629 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 285], 00:10:31.629 | 99.00th=[ 424], 99.50th=[ 482], 99.90th=[41157], 99.95th=[41157], 00:10:31.629 | 99.99th=[41157] 00:10:31.629 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:31.629 slat (nsec): min=4035, max=39837, avg=8317.24, stdev=2818.46 00:10:31.629 clat (usec): min=116, max=464, avg=163.08, stdev=28.36 00:10:31.629 lat (usec): min=121, max=503, avg=171.40, stdev=29.50 00:10:31.629 clat percentiles (usec): 00:10:31.629 | 1.00th=[ 124], 5.00th=[ 130], 10.00th=[ 135], 20.00th=[ 139], 00:10:31.629 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 157], 60.00th=[ 165], 00:10:31.629 | 70.00th=[ 178], 80.00th=[ 188], 90.00th=[ 200], 95.00th=[ 212], 00:10:31.629 | 99.00th=[ 237], 99.50th=[ 241], 99.90th=[ 347], 99.95th=[ 424], 00:10:31.629 | 99.99th=[ 465] 00:10:31.629 bw ( KiB/s): min= 7288, max= 7288, per=29.64%, avg=7288.00, stdev= 0.00, samples=1 00:10:31.629 iops : min= 1822, max= 1822, avg=1822.00, stdev= 0.00, samples=1 00:10:31.629 lat (usec) : 250=75.92%, 500=23.94% 00:10:31.629 lat (msec) : 50=0.13% 00:10:31.629 cpu : usr=1.10%, sys=3.00%, ctx=3742, majf=0, minf=2 00:10:31.629 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.629 issued rwts: total=1694,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.629 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.629 00:10:31.629 Run status group 0 (all jobs): 00:10:31.629 READ: bw=18.2MiB/s (19.1MB/s), 86.5KiB/s-9.99MiB/s (88.6kB/s-10.5MB/s), io=18.8MiB (19.7MB), run=1001-1029msec 00:10:31.629 WRITE: bw=24.0MiB/s (25.2MB/s), 2014KiB/s-10.7MiB/s 
(2062kB/s-11.2MB/s), io=24.7MiB (25.9MB), run=1001-1029msec 00:10:31.629 00:10:31.629 Disk stats (read/write): 00:10:31.629 nvme0n1: ios=2075/2329, merge=0/0, ticks=1384/341, in_queue=1725, util=97.70% 00:10:31.629 nvme0n2: ios=523/1024, merge=0/0, ticks=568/165, in_queue=733, util=83.33% 00:10:31.629 nvme0n3: ios=74/512, merge=0/0, ticks=1574/102, in_queue=1676, util=98.16% 00:10:31.629 nvme0n4: ios=1315/1536, merge=0/0, ticks=535/247, in_queue=782, util=89.21% 00:10:31.629 02:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:31.629 [global] 00:10:31.629 thread=1 00:10:31.629 invalidate=1 00:10:31.629 rw=randwrite 00:10:31.629 time_based=1 00:10:31.629 runtime=1 00:10:31.629 ioengine=libaio 00:10:31.629 direct=1 00:10:31.629 bs=4096 00:10:31.629 iodepth=1 00:10:31.629 norandommap=0 00:10:31.629 numjobs=1 00:10:31.629 00:10:31.629 verify_dump=1 00:10:31.629 verify_backlog=512 00:10:31.629 verify_state_save=0 00:10:31.629 do_verify=1 00:10:31.629 verify=crc32c-intel 00:10:31.629 [job0] 00:10:31.629 filename=/dev/nvme0n1 00:10:31.629 [job1] 00:10:31.629 filename=/dev/nvme0n2 00:10:31.629 [job2] 00:10:31.629 filename=/dev/nvme0n3 00:10:31.629 [job3] 00:10:31.629 filename=/dev/nvme0n4 00:10:31.629 Could not set queue depth (nvme0n1) 00:10:31.629 Could not set queue depth (nvme0n2) 00:10:31.629 Could not set queue depth (nvme0n3) 00:10:31.629 Could not set queue depth (nvme0n4) 00:10:31.889 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.889 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.889 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.889 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:10:31.889 fio-3.35 00:10:31.889 Starting 4 threads 00:10:33.269 00:10:33.269 job0: (groupid=0, jobs=1): err= 0: pid=856565: Mon Dec 16 02:32:03 2024 00:10:33.269 read: IOPS=40, BW=162KiB/s (166kB/s)(168KiB/1036msec) 00:10:33.270 slat (nsec): min=7104, max=24420, avg=16424.43, stdev=7485.76 00:10:33.270 clat (usec): min=178, max=42332, avg=22518.87, stdev=20516.91 00:10:33.270 lat (usec): min=188, max=42339, avg=22535.29, stdev=20515.47 00:10:33.270 clat percentiles (usec): 00:10:33.270 | 1.00th=[ 180], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 219], 00:10:33.270 | 30.00th=[ 223], 40.00th=[ 243], 50.00th=[40633], 60.00th=[40633], 00:10:33.270 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:33.270 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:33.270 | 99.99th=[42206] 00:10:33.270 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:10:33.270 slat (nsec): min=9260, max=37137, avg=12346.60, stdev=2287.78 00:10:33.270 clat (usec): min=135, max=364, avg=159.33, stdev=14.03 00:10:33.270 lat (usec): min=146, max=401, avg=171.68, stdev=14.97 00:10:33.270 clat percentiles (usec): 00:10:33.270 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 151], 00:10:33.270 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 161], 00:10:33.270 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 174], 95.00th=[ 178], 00:10:33.270 | 99.00th=[ 192], 99.50th=[ 200], 99.90th=[ 367], 99.95th=[ 367], 00:10:33.270 | 99.99th=[ 367] 00:10:33.270 bw ( KiB/s): min= 4096, max= 4096, per=34.53%, avg=4096.00, stdev= 0.00, samples=1 00:10:33.270 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:33.270 lat (usec) : 250=95.49%, 500=0.36% 00:10:33.270 lat (msec) : 50=4.15% 00:10:33.270 cpu : usr=0.48%, sys=0.48%, ctx=556, majf=0, minf=1 00:10:33.270 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:33.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.270 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.270 issued rwts: total=42,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.270 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:33.270 job1: (groupid=0, jobs=1): err= 0: pid=856566: Mon Dec 16 02:32:03 2024 00:10:33.270 read: IOPS=21, BW=87.5KiB/s (89.6kB/s)(88.0KiB/1006msec) 00:10:33.270 slat (nsec): min=9137, max=24463, avg=21721.86, stdev=2981.86 00:10:33.270 clat (usec): min=40571, max=41060, avg=40951.59, stdev=93.93 00:10:33.270 lat (usec): min=40580, max=41085, avg=40973.32, stdev=96.48 00:10:33.270 clat percentiles (usec): 00:10:33.270 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:33.270 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:33.270 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:33.270 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:33.270 | 99.99th=[41157] 00:10:33.270 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:10:33.270 slat (nsec): min=9449, max=39983, avg=10911.77, stdev=2141.85 00:10:33.270 clat (usec): min=131, max=394, avg=190.25, stdev=21.77 00:10:33.270 lat (usec): min=142, max=405, avg=201.16, stdev=22.07 00:10:33.270 clat percentiles (usec): 00:10:33.270 | 1.00th=[ 151], 5.00th=[ 161], 10.00th=[ 167], 20.00th=[ 174], 00:10:33.270 | 30.00th=[ 180], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 194], 00:10:33.270 | 70.00th=[ 200], 80.00th=[ 206], 90.00th=[ 217], 95.00th=[ 225], 00:10:33.270 | 99.00th=[ 247], 99.50th=[ 269], 99.90th=[ 396], 99.95th=[ 396], 00:10:33.270 | 99.99th=[ 396] 00:10:33.270 bw ( KiB/s): min= 4096, max= 4096, per=34.53%, avg=4096.00, stdev= 0.00, samples=1 00:10:33.270 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:33.270 lat (usec) : 250=95.13%, 500=0.75% 00:10:33.270 lat (msec) : 50=4.12% 00:10:33.270 cpu : 
usr=0.60%, sys=0.70%, ctx=534, majf=0, minf=2 00:10:33.270 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:33.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.270 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.270 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.270 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:33.270 job2: (groupid=0, jobs=1): err= 0: pid=856567: Mon Dec 16 02:32:03 2024 00:10:33.270 read: IOPS=38, BW=154KiB/s (158kB/s)(156KiB/1012msec) 00:10:33.270 slat (nsec): min=8374, max=27062, avg=17177.18, stdev=7209.01 00:10:33.270 clat (usec): min=201, max=42015, avg=23242.19, stdev=20492.72 00:10:33.270 lat (usec): min=210, max=42027, avg=23259.37, stdev=20499.09 00:10:33.270 clat percentiles (usec): 00:10:33.270 | 1.00th=[ 202], 5.00th=[ 206], 10.00th=[ 219], 20.00th=[ 225], 00:10:33.270 | 30.00th=[ 237], 40.00th=[ 265], 50.00th=[41157], 60.00th=[41157], 00:10:33.270 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:33.270 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:33.270 | 99.99th=[42206] 00:10:33.270 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:10:33.270 slat (nsec): min=10451, max=37770, avg=12575.11, stdev=2107.43 00:10:33.270 clat (usec): min=128, max=311, avg=188.77, stdev=20.26 00:10:33.270 lat (usec): min=140, max=324, avg=201.34, stdev=20.49 00:10:33.270 clat percentiles (usec): 00:10:33.270 | 1.00th=[ 147], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 174], 00:10:33.270 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:10:33.270 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 212], 95.00th=[ 221], 00:10:33.270 | 99.00th=[ 247], 99.50th=[ 269], 99.90th=[ 314], 99.95th=[ 314], 00:10:33.270 | 99.99th=[ 314] 00:10:33.270 bw ( KiB/s): min= 4096, max= 4096, per=34.53%, avg=4096.00, stdev= 0.00, samples=1 
00:10:33.270 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:33.270 lat (usec) : 250=94.56%, 500=1.45% 00:10:33.270 lat (msec) : 50=3.99% 00:10:33.270 cpu : usr=0.10%, sys=1.29%, ctx=553, majf=0, minf=1 00:10:33.270 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:33.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.270 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.270 issued rwts: total=39,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.270 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:33.270 job3: (groupid=0, jobs=1): err= 0: pid=856568: Mon Dec 16 02:32:03 2024 00:10:33.270 read: IOPS=1048, BW=4195KiB/s (4296kB/s)(4216KiB/1005msec) 00:10:33.270 slat (nsec): min=6831, max=38543, avg=7928.70, stdev=2469.18 00:10:33.270 clat (usec): min=158, max=42319, avg=713.77, stdev=4516.17 00:10:33.270 lat (usec): min=165, max=42326, avg=721.70, stdev=4516.85 00:10:33.270 clat percentiles (usec): 00:10:33.270 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 180], 00:10:33.270 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 202], 60.00th=[ 215], 00:10:33.270 | 70.00th=[ 229], 80.00th=[ 243], 90.00th=[ 253], 95.00th=[ 262], 00:10:33.270 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:10:33.270 | 99.99th=[42206] 00:10:33.270 write: IOPS=1528, BW=6113KiB/s (6260kB/s)(6144KiB/1005msec); 0 zone resets 00:10:33.270 slat (nsec): min=9432, max=38274, avg=10980.70, stdev=2259.38 00:10:33.270 clat (usec): min=111, max=339, avg=144.09, stdev=22.13 00:10:33.270 lat (usec): min=121, max=374, avg=155.07, stdev=23.16 00:10:33.270 clat percentiles (usec): 00:10:33.270 | 1.00th=[ 116], 5.00th=[ 121], 10.00th=[ 124], 20.00th=[ 127], 00:10:33.270 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 143], 00:10:33.270 | 70.00th=[ 155], 80.00th=[ 163], 90.00th=[ 176], 95.00th=[ 184], 00:10:33.270 | 99.00th=[ 202], 
99.50th=[ 208], 99.90th=[ 318], 99.95th=[ 338], 00:10:33.270 | 99.99th=[ 338] 00:10:33.270 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:10:33.270 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:33.270 lat (usec) : 250=94.67%, 500=4.75%, 750=0.04%, 1000=0.04% 00:10:33.270 lat (msec) : 50=0.50% 00:10:33.270 cpu : usr=1.39%, sys=2.39%, ctx=2592, majf=0, minf=1 00:10:33.270 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:33.270 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.270 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:33.270 issued rwts: total=1054,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:33.270 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:33.270 00:10:33.270 Run status group 0 (all jobs): 00:10:33.270 READ: bw=4467KiB/s (4574kB/s), 87.5KiB/s-4195KiB/s (89.6kB/s-4296kB/s), io=4628KiB (4739kB), run=1005-1036msec 00:10:33.270 WRITE: bw=11.6MiB/s (12.1MB/s), 1977KiB/s-6113KiB/s (2024kB/s-6260kB/s), io=12.0MiB (12.6MB), run=1005-1036msec 00:10:33.270 00:10:33.270 Disk stats (read/write): 00:10:33.270 nvme0n1: ios=87/512, merge=0/0, ticks=997/77, in_queue=1074, util=98.40% 00:10:33.270 nvme0n2: ios=38/512, merge=0/0, ticks=748/88, in_queue=836, util=87.31% 00:10:33.270 nvme0n3: ios=93/512, merge=0/0, ticks=1667/91, in_queue=1758, util=98.86% 00:10:33.270 nvme0n4: ios=1074/1536, merge=0/0, ticks=1561/211, in_queue=1772, util=98.95% 00:10:33.270 02:32:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:33.270 [global] 00:10:33.271 thread=1 00:10:33.271 invalidate=1 00:10:33.271 rw=write 00:10:33.271 time_based=1 00:10:33.271 runtime=1 00:10:33.271 ioengine=libaio 00:10:33.271 direct=1 00:10:33.271 bs=4096 00:10:33.271 iodepth=128 00:10:33.271 
norandommap=0 00:10:33.271 numjobs=1 00:10:33.271 00:10:33.271 verify_dump=1 00:10:33.271 verify_backlog=512 00:10:33.271 verify_state_save=0 00:10:33.271 do_verify=1 00:10:33.271 verify=crc32c-intel 00:10:33.271 [job0] 00:10:33.271 filename=/dev/nvme0n1 00:10:33.271 [job1] 00:10:33.271 filename=/dev/nvme0n2 00:10:33.271 [job2] 00:10:33.271 filename=/dev/nvme0n3 00:10:33.271 [job3] 00:10:33.271 filename=/dev/nvme0n4 00:10:33.271 Could not set queue depth (nvme0n1) 00:10:33.271 Could not set queue depth (nvme0n2) 00:10:33.271 Could not set queue depth (nvme0n3) 00:10:33.271 Could not set queue depth (nvme0n4) 00:10:33.531 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.531 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.531 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.531 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:33.531 fio-3.35 00:10:33.531 Starting 4 threads 00:10:34.913 00:10:34.913 job0: (groupid=0, jobs=1): err= 0: pid=856958: Mon Dec 16 02:32:05 2024 00:10:34.913 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:10:34.913 slat (nsec): min=1400, max=27086k, avg=101235.11, stdev=865786.50 00:10:34.913 clat (usec): min=4087, max=58772, avg=13449.32, stdev=7412.44 00:10:34.913 lat (usec): min=4665, max=58799, avg=13550.55, stdev=7476.79 00:10:34.913 clat percentiles (usec): 00:10:34.913 | 1.00th=[ 5735], 5.00th=[ 6718], 10.00th=[ 7308], 20.00th=[ 8717], 00:10:34.913 | 30.00th=[ 9634], 40.00th=[10421], 50.00th=[11076], 60.00th=[12125], 00:10:34.913 | 70.00th=[13566], 80.00th=[17171], 90.00th=[22152], 95.00th=[31851], 00:10:34.913 | 99.00th=[43254], 99.50th=[44827], 99.90th=[45351], 99.95th=[45351], 00:10:34.913 | 99.99th=[58983] 00:10:34.913 write: IOPS=4574, BW=17.9MiB/s 
(18.7MB/s)(17.9MiB/1003msec); 0 zone resets 00:10:34.913 slat (usec): min=2, max=20623, avg=109.16, stdev=765.12 00:10:34.913 clat (usec): min=281, max=53610, avg=15188.32, stdev=8983.84 00:10:34.913 lat (usec): min=288, max=53622, avg=15297.48, stdev=9053.41 00:10:34.913 clat percentiles (usec): 00:10:34.913 | 1.00th=[ 2376], 5.00th=[ 3130], 10.00th=[ 5735], 20.00th=[ 8225], 00:10:34.913 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[12911], 60.00th=[17171], 00:10:34.913 | 70.00th=[20055], 80.00th=[21103], 90.00th=[25297], 95.00th=[28443], 00:10:34.913 | 99.00th=[50070], 99.50th=[52691], 99.90th=[53740], 99.95th=[53740], 00:10:34.913 | 99.99th=[53740] 00:10:34.913 bw ( KiB/s): min=12864, max=22816, per=24.38%, avg=17840.00, stdev=7037.13, samples=2 00:10:34.913 iops : min= 3218, max= 5704, avg=4461.00, stdev=1757.87, samples=2 00:10:34.913 lat (usec) : 500=0.02% 00:10:34.913 lat (msec) : 2=0.25%, 4=3.07%, 10=34.67%, 20=38.39%, 50=23.03% 00:10:34.913 lat (msec) : 100=0.55% 00:10:34.913 cpu : usr=3.49%, sys=4.69%, ctx=507, majf=0, minf=1 00:10:34.913 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:34.913 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.913 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.913 issued rwts: total=4096,4588,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.913 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.913 job1: (groupid=0, jobs=1): err= 0: pid=856977: Mon Dec 16 02:32:05 2024 00:10:34.913 read: IOPS=4140, BW=16.2MiB/s (17.0MB/s)(16.3MiB/1008msec) 00:10:34.913 slat (nsec): min=1303, max=13637k, avg=117940.98, stdev=887820.80 00:10:34.913 clat (usec): min=3527, max=55268, avg=14429.00, stdev=7328.89 00:10:34.913 lat (usec): min=3540, max=55273, avg=14546.94, stdev=7396.80 00:10:34.913 clat percentiles (usec): 00:10:34.913 | 1.00th=[ 5014], 5.00th=[ 7504], 10.00th=[ 8979], 20.00th=[ 9634], 00:10:34.913 | 30.00th=[10028], 
40.00th=[10683], 50.00th=[11994], 60.00th=[13566], 00:10:34.913 | 70.00th=[15533], 80.00th=[19268], 90.00th=[22938], 95.00th=[28443], 00:10:34.913 | 99.00th=[49021], 99.50th=[52691], 99.90th=[55313], 99.95th=[55313], 00:10:34.913 | 99.99th=[55313] 00:10:34.913 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:10:34.913 slat (usec): min=2, max=13833, avg=99.00, stdev=632.22 00:10:34.913 clat (usec): min=289, max=64235, avg=14677.98, stdev=12581.61 00:10:34.913 lat (usec): min=353, max=64242, avg=14776.98, stdev=12672.47 00:10:34.913 clat percentiles (usec): 00:10:34.913 | 1.00th=[ 2311], 5.00th=[ 4359], 10.00th=[ 6652], 20.00th=[ 8455], 00:10:34.913 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10290], 00:10:34.913 | 70.00th=[12780], 80.00th=[19792], 90.00th=[25822], 95.00th=[51119], 00:10:34.913 | 99.00th=[59507], 99.50th=[60031], 99.90th=[64226], 99.95th=[64226], 00:10:34.913 | 99.99th=[64226] 00:10:34.913 bw ( KiB/s): min=15552, max=20912, per=24.91%, avg=18232.00, stdev=3790.09, samples=2 00:10:34.913 iops : min= 3888, max= 5228, avg=4558.00, stdev=947.52, samples=2 00:10:34.913 lat (usec) : 500=0.05%, 1000=0.11% 00:10:34.913 lat (msec) : 2=0.28%, 4=2.12%, 10=39.85%, 20=38.08%, 50=16.23% 00:10:34.913 lat (msec) : 100=3.28% 00:10:34.913 cpu : usr=4.37%, sys=6.06%, ctx=363, majf=0, minf=1 00:10:34.913 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:34.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.914 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.914 issued rwts: total=4174,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.914 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.914 job2: (groupid=0, jobs=1): err= 0: pid=856997: Mon Dec 16 02:32:05 2024 00:10:34.914 read: IOPS=4359, BW=17.0MiB/s (17.9MB/s)(17.1MiB/1004msec) 00:10:34.914 slat (nsec): min=1025, max=28798k, avg=106421.06, stdev=857735.63 
00:10:34.914 clat (usec): min=1588, max=56779, avg=14701.93, stdev=7740.39 00:10:34.914 lat (usec): min=3972, max=56785, avg=14808.36, stdev=7784.29 00:10:34.914 clat percentiles (usec): 00:10:34.914 | 1.00th=[ 5604], 5.00th=[ 9110], 10.00th=[10028], 20.00th=[10814], 00:10:34.914 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11863], 60.00th=[12518], 00:10:34.914 | 70.00th=[14222], 80.00th=[16909], 90.00th=[22414], 95.00th=[33162], 00:10:34.914 | 99.00th=[47973], 99.50th=[49546], 99.90th=[51643], 99.95th=[51643], 00:10:34.914 | 99.99th=[56886] 00:10:34.914 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:10:34.914 slat (nsec): min=1910, max=16460k, avg=99713.17, stdev=654543.51 00:10:34.914 clat (usec): min=729, max=83686, avg=13690.38, stdev=8637.61 00:10:34.914 lat (usec): min=737, max=83689, avg=13790.09, stdev=8703.15 00:10:34.914 clat percentiles (usec): 00:10:34.914 | 1.00th=[ 4948], 5.00th=[ 6587], 10.00th=[ 8094], 20.00th=[ 9372], 00:10:34.914 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11338], 60.00th=[11600], 00:10:34.914 | 70.00th=[12911], 80.00th=[15533], 90.00th=[21103], 95.00th=[30540], 00:10:34.914 | 99.00th=[55837], 99.50th=[55837], 99.90th=[83362], 99.95th=[83362], 00:10:34.914 | 99.99th=[83362] 00:10:34.914 bw ( KiB/s): min=16384, max=20480, per=25.19%, avg=18432.00, stdev=2896.31, samples=2 00:10:34.914 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:10:34.914 lat (usec) : 750=0.02%, 1000=0.01% 00:10:34.914 lat (msec) : 2=0.02%, 4=0.19%, 10=17.73%, 20=67.79%, 50=13.08% 00:10:34.914 lat (msec) : 100=1.16% 00:10:34.914 cpu : usr=2.89%, sys=4.59%, ctx=456, majf=0, minf=2 00:10:34.914 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:34.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.914 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.914 issued rwts: total=4377,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:10:34.914 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.914 job3: (groupid=0, jobs=1): err= 0: pid=857003: Mon Dec 16 02:32:05 2024 00:10:34.914 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:10:34.914 slat (nsec): min=1152, max=27677k, avg=111701.69, stdev=833944.40 00:10:34.914 clat (usec): min=4300, max=48201, avg=13802.00, stdev=6217.45 00:10:34.914 lat (usec): min=4307, max=48232, avg=13913.70, stdev=6260.01 00:10:34.914 clat percentiles (usec): 00:10:34.914 | 1.00th=[ 6390], 5.00th=[ 7767], 10.00th=[ 8455], 20.00th=[10159], 00:10:34.914 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11600], 60.00th=[12649], 00:10:34.914 | 70.00th=[14615], 80.00th=[16712], 90.00th=[20579], 95.00th=[25822], 00:10:34.914 | 99.00th=[45876], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:10:34.914 | 99.99th=[47973] 00:10:34.914 write: IOPS=4632, BW=18.1MiB/s (19.0MB/s)(18.1MiB/1001msec); 0 zone resets 00:10:34.914 slat (nsec): min=1980, max=9007.6k, avg=97078.51, stdev=443920.51 00:10:34.914 clat (usec): min=576, max=44650, avg=13675.93, stdev=6513.11 00:10:34.914 lat (usec): min=1375, max=44659, avg=13773.01, stdev=6558.20 00:10:34.914 clat percentiles (usec): 00:10:34.914 | 1.00th=[ 3720], 5.00th=[ 7570], 10.00th=[ 9765], 20.00th=[10552], 00:10:34.914 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11600], 00:10:34.914 | 70.00th=[12649], 80.00th=[17695], 90.00th=[21103], 95.00th=[25822], 00:10:34.914 | 99.00th=[40633], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:10:34.914 | 99.99th=[44827] 00:10:34.914 bw ( KiB/s): min=19256, max=19256, per=26.31%, avg=19256.00, stdev= 0.00, samples=1 00:10:34.914 iops : min= 4814, max= 4814, avg=4814.00, stdev= 0.00, samples=1 00:10:34.914 lat (usec) : 750=0.01% 00:10:34.914 lat (msec) : 2=0.16%, 4=0.35%, 10=14.06%, 20=72.58%, 50=12.84% 00:10:34.914 cpu : usr=3.40%, sys=3.90%, ctx=621, majf=0, minf=2 00:10:34.914 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, 
>=64=99.3% 00:10:34.914 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.914 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:34.914 issued rwts: total=4608,4637,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.914 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:34.914 00:10:34.914 Run status group 0 (all jobs): 00:10:34.914 READ: bw=66.9MiB/s (70.1MB/s), 16.0MiB/s-18.0MiB/s (16.7MB/s-18.9MB/s), io=67.4MiB (70.7MB), run=1001-1008msec 00:10:34.914 WRITE: bw=71.5MiB/s (74.9MB/s), 17.9MiB/s-18.1MiB/s (18.7MB/s-19.0MB/s), io=72.0MiB (75.5MB), run=1001-1008msec 00:10:34.914 00:10:34.914 Disk stats (read/write): 00:10:34.914 nvme0n1: ios=3101/3253, merge=0/0, ticks=38125/42988, in_queue=81113, util=98.20% 00:10:34.914 nvme0n2: ios=4128/4175, merge=0/0, ticks=55568/45775, in_queue=101343, util=99.28% 00:10:34.914 nvme0n3: ios=3810/4096, merge=0/0, ticks=33404/37146, in_queue=70550, util=88.64% 00:10:34.914 nvme0n4: ios=3584/4096, merge=0/0, ticks=27800/31665, in_queue=59465, util=89.61% 00:10:34.914 02:32:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:34.914 [global] 00:10:34.914 thread=1 00:10:34.914 invalidate=1 00:10:34.914 rw=randwrite 00:10:34.914 time_based=1 00:10:34.914 runtime=1 00:10:34.914 ioengine=libaio 00:10:34.914 direct=1 00:10:34.914 bs=4096 00:10:34.914 iodepth=128 00:10:34.914 norandommap=0 00:10:34.914 numjobs=1 00:10:34.914 00:10:34.914 verify_dump=1 00:10:34.914 verify_backlog=512 00:10:34.914 verify_state_save=0 00:10:34.914 do_verify=1 00:10:34.914 verify=crc32c-intel 00:10:34.914 [job0] 00:10:34.914 filename=/dev/nvme0n1 00:10:34.914 [job1] 00:10:34.914 filename=/dev/nvme0n2 00:10:34.914 [job2] 00:10:34.914 filename=/dev/nvme0n3 00:10:34.914 [job3] 00:10:34.914 filename=/dev/nvme0n4 00:10:34.914 Could not set queue depth (nvme0n1) 
00:10:34.914 Could not set queue depth (nvme0n2) 00:10:34.914 Could not set queue depth (nvme0n3) 00:10:34.914 Could not set queue depth (nvme0n4) 00:10:35.173 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:35.173 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:35.173 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:35.173 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:35.173 fio-3.35 00:10:35.173 Starting 4 threads 00:10:36.579 00:10:36.579 job0: (groupid=0, jobs=1): err= 0: pid=857427: Mon Dec 16 02:32:06 2024 00:10:36.579 read: IOPS=3683, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1005msec) 00:10:36.579 slat (nsec): min=1184, max=12867k, avg=109443.71, stdev=654956.74 00:10:36.579 clat (usec): min=1446, max=28416, avg=13370.21, stdev=3295.37 00:10:36.579 lat (usec): min=4563, max=28442, avg=13479.65, stdev=3357.52 00:10:36.579 clat percentiles (usec): 00:10:36.579 | 1.00th=[ 4686], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10421], 00:10:36.579 | 30.00th=[11338], 40.00th=[12518], 50.00th=[13304], 60.00th=[14353], 00:10:36.579 | 70.00th=[14877], 80.00th=[15533], 90.00th=[17433], 95.00th=[19530], 00:10:36.579 | 99.00th=[22676], 99.50th=[24773], 99.90th=[26346], 99.95th=[26346], 00:10:36.579 | 99.99th=[28443] 00:10:36.579 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:10:36.579 slat (usec): min=2, max=17240, avg=140.57, stdev=780.17 00:10:36.579 clat (usec): min=7866, max=40878, avg=19037.21, stdev=7814.92 00:10:36.579 lat (usec): min=7872, max=44825, avg=19177.78, stdev=7885.85 00:10:36.579 clat percentiles (usec): 00:10:36.579 | 1.00th=[ 8848], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[12125], 00:10:36.579 | 30.00th=[12780], 40.00th=[14877], 50.00th=[15401], 60.00th=[21103], 
00:10:36.579 | 70.00th=[24773], 80.00th=[26608], 90.00th=[30802], 95.00th=[32900], 00:10:36.579 | 99.00th=[35914], 99.50th=[36963], 99.90th=[40633], 99.95th=[40633], 00:10:36.579 | 99.99th=[40633] 00:10:36.579 bw ( KiB/s): min=12912, max=19776, per=24.65%, avg=16344.00, stdev=4853.58, samples=2 00:10:36.579 iops : min= 3228, max= 4944, avg=4086.00, stdev=1213.40, samples=2 00:10:36.579 lat (msec) : 2=0.01%, 10=14.22%, 20=61.73%, 50=24.03% 00:10:36.579 cpu : usr=2.59%, sys=4.38%, ctx=345, majf=0, minf=1 00:10:36.579 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:36.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:36.579 issued rwts: total=3702,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.579 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:36.579 job1: (groupid=0, jobs=1): err= 0: pid=857440: Mon Dec 16 02:32:06 2024 00:10:36.579 read: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:10:36.579 slat (nsec): min=1139, max=10586k, avg=108810.70, stdev=688894.05 00:10:36.579 clat (usec): min=3925, max=36433, avg=14551.02, stdev=5600.25 00:10:36.579 lat (usec): min=3934, max=36442, avg=14659.83, stdev=5662.84 00:10:36.579 clat percentiles (usec): 00:10:36.579 | 1.00th=[ 4113], 5.00th=[ 7832], 10.00th=[ 8848], 20.00th=[ 9634], 00:10:36.579 | 30.00th=[10290], 40.00th=[12256], 50.00th=[13960], 60.00th=[15270], 00:10:36.579 | 70.00th=[16712], 80.00th=[19006], 90.00th=[21627], 95.00th=[24773], 00:10:36.579 | 99.00th=[31327], 99.50th=[32637], 99.90th=[36439], 99.95th=[36439], 00:10:36.579 | 99.99th=[36439] 00:10:36.579 write: IOPS=3628, BW=14.2MiB/s (14.9MB/s)(14.3MiB/1006msec); 0 zone resets 00:10:36.579 slat (nsec): min=1889, max=13696k, avg=149265.69, stdev=743303.29 00:10:36.579 clat (usec): min=2343, max=45449, avg=20660.24, stdev=10299.18 00:10:36.579 lat (usec): min=2350, max=45459, 
avg=20809.51, stdev=10368.06 00:10:36.579 clat percentiles (usec): 00:10:36.579 | 1.00th=[ 3458], 5.00th=[ 5276], 10.00th=[ 8160], 20.00th=[10028], 00:10:36.579 | 30.00th=[10945], 40.00th=[15795], 50.00th=[22414], 60.00th=[24511], 00:10:36.579 | 70.00th=[27132], 80.00th=[31065], 90.00th=[33817], 95.00th=[36963], 00:10:36.579 | 99.00th=[43254], 99.50th=[43779], 99.90th=[45351], 99.95th=[45351], 00:10:36.579 | 99.99th=[45351] 00:10:36.579 bw ( KiB/s): min= 9616, max=19056, per=21.62%, avg=14336.00, stdev=6675.09, samples=2 00:10:36.579 iops : min= 2404, max= 4764, avg=3584.00, stdev=1668.77, samples=2 00:10:36.579 lat (msec) : 4=0.87%, 10=19.16%, 20=45.22%, 50=34.75% 00:10:36.579 cpu : usr=2.59%, sys=4.98%, ctx=390, majf=0, minf=1 00:10:36.579 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:36.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:36.580 issued rwts: total=3584,3650,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.580 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:36.580 job2: (groupid=0, jobs=1): err= 0: pid=857456: Mon Dec 16 02:32:06 2024 00:10:36.580 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:10:36.580 slat (nsec): min=1099, max=47015k, avg=186561.77, stdev=1278502.26 00:10:36.580 clat (usec): min=5728, max=67983, avg=23501.85, stdev=13593.68 00:10:36.580 lat (usec): min=6193, max=67990, avg=23688.41, stdev=13650.06 00:10:36.580 clat percentiles (usec): 00:10:36.580 | 1.00th=[ 8586], 5.00th=[ 9634], 10.00th=[10552], 20.00th=[11731], 00:10:36.580 | 30.00th=[12125], 40.00th=[13042], 50.00th=[23462], 60.00th=[25297], 00:10:36.580 | 70.00th=[28443], 80.00th=[31851], 90.00th=[44303], 95.00th=[48497], 00:10:36.580 | 99.00th=[67634], 99.50th=[67634], 99.90th=[67634], 99.95th=[67634], 00:10:36.580 | 99.99th=[67634] 00:10:36.580 write: IOPS=3396, BW=13.3MiB/s 
(13.9MB/s)(13.3MiB/1005msec); 0 zone resets 00:10:36.580 slat (nsec): min=1797, max=15756k, avg=120503.01, stdev=689678.03 00:10:36.580 clat (usec): min=3699, max=61101, avg=15944.44, stdev=7675.33 00:10:36.580 lat (usec): min=5094, max=61107, avg=16064.94, stdev=7675.56 00:10:36.580 clat percentiles (usec): 00:10:36.580 | 1.00th=[ 8979], 5.00th=[ 9634], 10.00th=[10683], 20.00th=[11207], 00:10:36.580 | 30.00th=[11731], 40.00th=[12256], 50.00th=[13435], 60.00th=[15926], 00:10:36.580 | 70.00th=[16319], 80.00th=[18482], 90.00th=[22676], 95.00th=[29492], 00:10:36.580 | 99.00th=[56361], 99.50th=[60031], 99.90th=[61080], 99.95th=[61080], 00:10:36.580 | 99.99th=[61080] 00:10:36.580 bw ( KiB/s): min=13104, max=13184, per=19.83%, avg=13144.00, stdev=56.57, samples=2 00:10:36.580 iops : min= 3276, max= 3296, avg=3286.00, stdev=14.14, samples=2 00:10:36.580 lat (msec) : 4=0.02%, 10=7.26%, 20=60.06%, 50=29.58%, 100=3.08% 00:10:36.580 cpu : usr=2.19%, sys=2.89%, ctx=328, majf=0, minf=1 00:10:36.580 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:10:36.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:36.580 issued rwts: total=3072,3413,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.580 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:36.580 job3: (groupid=0, jobs=1): err= 0: pid=857462: Mon Dec 16 02:32:06 2024 00:10:36.580 read: IOPS=5618, BW=21.9MiB/s (23.0MB/s)(22.9MiB/1044msec) 00:10:36.580 slat (nsec): min=1431, max=3977.6k, avg=82132.15, stdev=429274.12 00:10:36.580 clat (usec): min=6580, max=50734, avg=11522.25, stdev=5509.20 00:10:36.580 lat (usec): min=6593, max=52441, avg=11604.38, stdev=5516.55 00:10:36.580 clat percentiles (usec): 00:10:36.580 | 1.00th=[ 7373], 5.00th=[ 8455], 10.00th=[ 9241], 20.00th=[ 9634], 00:10:36.580 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10814], 60.00th=[11207], 00:10:36.580 | 
70.00th=[11600], 80.00th=[11863], 90.00th=[12649], 95.00th=[13304], 00:10:36.580 | 99.00th=[47449], 99.50th=[50070], 99.90th=[50594], 99.95th=[50594], 00:10:36.580 | 99.99th=[50594] 00:10:36.580 write: IOPS=5885, BW=23.0MiB/s (24.1MB/s)(24.0MiB/1044msec); 0 zone resets 00:10:36.580 slat (usec): min=2, max=9520, avg=79.21, stdev=421.36 00:10:36.580 clat (usec): min=6637, max=20554, avg=10548.28, stdev=1247.48 00:10:36.580 lat (usec): min=6640, max=20570, avg=10627.49, stdev=1299.11 00:10:36.580 clat percentiles (usec): 00:10:36.580 | 1.00th=[ 8029], 5.00th=[ 8979], 10.00th=[ 9110], 20.00th=[ 9372], 00:10:36.580 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10814], 60.00th=[11207], 00:10:36.580 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11600], 95.00th=[11863], 00:10:36.580 | 99.00th=[14222], 99.50th=[14746], 99.90th=[15795], 99.95th=[17695], 00:10:36.580 | 99.99th=[20579] 00:10:36.580 bw ( KiB/s): min=23760, max=25392, per=37.07%, avg=24576.00, stdev=1154.00, samples=2 00:10:36.580 iops : min= 5940, max= 6348, avg=6144.00, stdev=288.50, samples=2 00:10:36.580 lat (msec) : 10=39.13%, 20=59.81%, 50=0.72%, 100=0.35% 00:10:36.580 cpu : usr=4.03%, sys=7.09%, ctx=618, majf=0, minf=2 00:10:36.580 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:36.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:36.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:36.580 issued rwts: total=5866,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:36.580 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:36.580 00:10:36.580 Run status group 0 (all jobs): 00:10:36.580 READ: bw=60.7MiB/s (63.7MB/s), 11.9MiB/s-21.9MiB/s (12.5MB/s-23.0MB/s), io=63.4MiB (66.5MB), run=1005-1044msec 00:10:36.580 WRITE: bw=64.7MiB/s (67.9MB/s), 13.3MiB/s-23.0MiB/s (13.9MB/s-24.1MB/s), io=67.6MiB (70.9MB), run=1005-1044msec 00:10:36.580 00:10:36.580 Disk stats (read/write): 00:10:36.580 nvme0n1: ios=3357/3584, 
merge=0/0, ticks=22070/30290, in_queue=52360, util=99.50% 00:10:36.580 nvme0n2: ios=3108/3343, merge=0/0, ticks=25944/37490, in_queue=63434, util=99.70% 00:10:36.580 nvme0n3: ios=2701/3072, merge=0/0, ticks=16370/10926, in_queue=27296, util=97.72% 00:10:36.580 nvme0n4: ios=4974/5120, merge=0/0, ticks=18640/20240, in_queue=38880, util=98.53% 00:10:36.580 02:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:36.580 02:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=857583 00:10:36.580 02:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:36.580 02:32:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:36.580 [global] 00:10:36.580 thread=1 00:10:36.580 invalidate=1 00:10:36.580 rw=read 00:10:36.580 time_based=1 00:10:36.580 runtime=10 00:10:36.580 ioengine=libaio 00:10:36.580 direct=1 00:10:36.580 bs=4096 00:10:36.580 iodepth=1 00:10:36.580 norandommap=1 00:10:36.580 numjobs=1 00:10:36.580 00:10:36.580 [job0] 00:10:36.580 filename=/dev/nvme0n1 00:10:36.580 [job1] 00:10:36.580 filename=/dev/nvme0n2 00:10:36.580 [job2] 00:10:36.580 filename=/dev/nvme0n3 00:10:36.580 [job3] 00:10:36.580 filename=/dev/nvme0n4 00:10:36.580 Could not set queue depth (nvme0n1) 00:10:36.580 Could not set queue depth (nvme0n2) 00:10:36.580 Could not set queue depth (nvme0n3) 00:10:36.580 Could not set queue depth (nvme0n4) 00:10:36.844 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.844 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.844 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:36.844 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:10:36.844 fio-3.35 00:10:36.844 Starting 4 threads 00:10:39.549 02:32:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:39.549 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=3903488, buflen=4096 00:10:39.549 fio: pid=857886, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:39.549 02:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:39.808 02:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:39.808 02:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:39.808 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=11010048, buflen=4096 00:10:39.808 fio: pid=857885, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:40.068 02:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:40.068 02:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:40.068 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=55480320, buflen=4096 00:10:40.068 fio: pid=857881, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:40.328 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=331776, buflen=4096 00:10:40.328 fio: pid=857884, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:40.328 02:32:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:40.328 02:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:40.328 00:10:40.328 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=857881: Mon Dec 16 02:32:10 2024 00:10:40.328 read: IOPS=4285, BW=16.7MiB/s (17.6MB/s)(52.9MiB/3161msec) 00:10:40.328 slat (usec): min=6, max=28765, avg=10.77, stdev=255.60 00:10:40.328 clat (usec): min=152, max=41848, avg=219.66, stdev=1051.46 00:10:40.328 lat (usec): min=160, max=41871, avg=230.43, stdev=1082.56 00:10:40.328 clat percentiles (usec): 00:10:40.328 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 176], 00:10:40.328 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:10:40.328 | 70.00th=[ 196], 80.00th=[ 206], 90.00th=[ 231], 95.00th=[ 249], 00:10:40.328 | 99.00th=[ 265], 99.50th=[ 281], 99.90th=[ 469], 99.95th=[40633], 00:10:40.328 | 99.99th=[41681] 00:10:40.328 bw ( KiB/s): min= 8253, max=21184, per=85.71%, avg=17783.50, stdev=4966.50, samples=6 00:10:40.328 iops : min= 2063, max= 5296, avg=4445.83, stdev=1241.72, samples=6 00:10:40.328 lat (usec) : 250=95.54%, 500=4.36%, 750=0.03% 00:10:40.328 lat (msec) : 50=0.07% 00:10:40.328 cpu : usr=1.23%, sys=4.15%, ctx=13551, majf=0, minf=1 00:10:40.328 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:40.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.328 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.328 issued rwts: total=13546,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.328 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:40.328 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not 
supported): pid=857884: Mon Dec 16 02:32:10 2024 00:10:40.328 read: IOPS=24, BW=97.3KiB/s (99.7kB/s)(324KiB/3329msec) 00:10:40.328 slat (usec): min=10, max=13780, avg=341.51, stdev=1834.67 00:10:40.328 clat (usec): min=315, max=41958, avg=40487.96, stdev=4521.08 00:10:40.328 lat (usec): min=348, max=54967, avg=40833.44, stdev=4927.08 00:10:40.328 clat percentiles (usec): 00:10:40.328 | 1.00th=[ 314], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:40.328 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:40.328 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:40.328 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:40.328 | 99.99th=[42206] 00:10:40.328 bw ( KiB/s): min= 93, max= 104, per=0.47%, avg=98.17, stdev= 4.67, samples=6 00:10:40.328 iops : min= 23, max= 26, avg=24.50, stdev= 1.22, samples=6 00:10:40.328 lat (usec) : 500=1.22% 00:10:40.328 lat (msec) : 50=97.56% 00:10:40.328 cpu : usr=0.12%, sys=0.00%, ctx=85, majf=0, minf=2 00:10:40.328 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:40.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.328 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.328 issued rwts: total=82,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.328 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:40.328 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=857885: Mon Dec 16 02:32:10 2024 00:10:40.328 read: IOPS=908, BW=3631KiB/s (3718kB/s)(10.5MiB/2961msec) 00:10:40.328 slat (usec): min=5, max=827, avg= 8.82, stdev=16.14 00:10:40.328 clat (usec): min=162, max=42027, avg=1083.15, stdev=5844.37 00:10:40.328 lat (usec): min=170, max=42051, avg=1091.97, stdev=5847.61 00:10:40.328 clat percentiles (usec): 00:10:40.328 | 1.00th=[ 178], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 202], 00:10:40.328 | 
30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 217], 60.00th=[ 223], 00:10:40.328 | 70.00th=[ 229], 80.00th=[ 239], 90.00th=[ 253], 95.00th=[ 269], 00:10:40.328 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:40.328 | 99.99th=[42206] 00:10:40.328 bw ( KiB/s): min= 104, max=12792, per=12.91%, avg=2678.40, stdev=5653.89, samples=5 00:10:40.328 iops : min= 26, max= 3198, avg=669.60, stdev=1413.47, samples=5 00:10:40.328 lat (usec) : 250=87.58%, 500=10.08%, 750=0.04% 00:10:40.328 lat (msec) : 2=0.04%, 4=0.07%, 10=0.04%, 50=2.12% 00:10:40.328 cpu : usr=0.24%, sys=0.91%, ctx=2690, majf=0, minf=2 00:10:40.328 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:40.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.328 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.328 issued rwts: total=2689,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.328 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:40.328 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=857886: Mon Dec 16 02:32:10 2024 00:10:40.328 read: IOPS=349, BW=1398KiB/s (1432kB/s)(3812KiB/2726msec) 00:10:40.328 slat (nsec): min=6340, max=30025, avg=8924.27, stdev=4239.18 00:10:40.328 clat (usec): min=187, max=42043, avg=2822.59, stdev=9921.98 00:10:40.328 lat (usec): min=194, max=42066, avg=2831.52, stdev=9924.55 00:10:40.328 clat percentiles (usec): 00:10:40.328 | 1.00th=[ 194], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 212], 00:10:40.328 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 233], 00:10:40.328 | 70.00th=[ 241], 80.00th=[ 253], 90.00th=[ 281], 95.00th=[41157], 00:10:40.328 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:10:40.328 | 99.99th=[42206] 00:10:40.328 bw ( KiB/s): min= 96, max= 736, per=1.24%, avg=257.60, stdev=272.26, samples=5 00:10:40.328 iops : min= 24, max= 184, avg=64.40, 
stdev=68.06, samples=5 00:10:40.328 lat (usec) : 250=79.04%, 500=14.26% 00:10:40.328 lat (msec) : 4=0.10%, 10=0.10%, 20=0.10%, 50=6.29% 00:10:40.328 cpu : usr=0.11%, sys=0.33%, ctx=955, majf=0, minf=2 00:10:40.328 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:40.328 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.328 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.328 issued rwts: total=954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.329 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:40.329 00:10:40.329 Run status group 0 (all jobs): 00:10:40.329 READ: bw=20.3MiB/s (21.2MB/s), 97.3KiB/s-16.7MiB/s (99.7kB/s-17.6MB/s), io=67.4MiB (70.7MB), run=2726-3329msec 00:10:40.329 00:10:40.329 Disk stats (read/write): 00:10:40.329 nvme0n1: ios=13566/0, merge=0/0, ticks=3313/0, in_queue=3313, util=97.94% 00:10:40.329 nvme0n2: ios=76/0, merge=0/0, ticks=3076/0, in_queue=3076, util=95.67% 00:10:40.329 nvme0n3: ios=2685/0, merge=0/0, ticks=2748/0, in_queue=2748, util=95.84% 00:10:40.329 nvme0n4: ios=613/0, merge=0/0, ticks=3019/0, in_queue=3019, util=98.81% 00:10:40.329 02:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:40.329 02:32:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:40.588 02:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:40.588 02:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:40.847 02:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:10:40.847 02:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:41.106 02:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:41.106 02:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:41.365 02:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:41.365 02:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 857583 00:10:41.365 02:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:41.365 02:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:41.366 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.366 02:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:41.366 02:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:41.366 02:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:41.366 02:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.366 02:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:41.366 02:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:41.366 02:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:41.366 02:32:11 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:41.366 02:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:41.366 nvmf hotplug test: fio failed as expected 00:10:41.366 02:32:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:41.625 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:41.625 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:41.625 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:41.625 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:41.625 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:41.625 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:41.625 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:41.625 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:41.625 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:41.625 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:41.625 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:41.625 rmmod nvme_tcp 00:10:41.625 rmmod nvme_fabrics 00:10:41.625 rmmod nvme_keyring 00:10:41.625 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:41.625 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # 
set -e 00:10:41.625 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:41.625 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 854877 ']' 00:10:41.625 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 854877 00:10:41.625 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 854877 ']' 00:10:41.625 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 854877 00:10:41.625 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:41.625 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:41.625 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 854877 00:10:41.626 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:41.626 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:41.626 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 854877' 00:10:41.626 killing process with pid 854877 00:10:41.626 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 854877 00:10:41.626 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 854877 00:10:41.885 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:41.885 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:41.885 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:41.885 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@297 -- # iptr 00:10:41.885 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:41.885 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:41.886 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:41.886 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:41.886 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:41.886 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:41.886 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:41.886 02:32:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.424 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:44.424 00:10:44.424 real 0m27.000s 00:10:44.424 user 1m47.484s 00:10:44.424 sys 0m8.424s 00:10:44.424 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:44.424 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.424 ************************************ 00:10:44.424 END TEST nvmf_fio_target 00:10:44.424 ************************************ 00:10:44.424 02:32:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:44.424 02:32:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:44.424 02:32:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:44.424 02:32:14 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:44.424 ************************************ 00:10:44.424 START TEST nvmf_bdevio 00:10:44.424 ************************************ 00:10:44.424 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:44.424 * Looking for test storage... 00:10:44.424 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.424 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:44.424 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:10:44.424 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:44.424 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:44.424 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:44.424 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:44.424 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@341 -- # ver2_l=1 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:44.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.425 --rc genhtml_branch_coverage=1 00:10:44.425 --rc genhtml_function_coverage=1 00:10:44.425 --rc genhtml_legend=1 00:10:44.425 --rc geninfo_all_blocks=1 00:10:44.425 --rc geninfo_unexecuted_blocks=1 00:10:44.425 00:10:44.425 ' 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:44.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.425 --rc genhtml_branch_coverage=1 00:10:44.425 --rc genhtml_function_coverage=1 00:10:44.425 --rc genhtml_legend=1 00:10:44.425 --rc geninfo_all_blocks=1 00:10:44.425 --rc geninfo_unexecuted_blocks=1 00:10:44.425 00:10:44.425 ' 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:44.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.425 --rc genhtml_branch_coverage=1 00:10:44.425 --rc genhtml_function_coverage=1 00:10:44.425 --rc genhtml_legend=1 00:10:44.425 --rc geninfo_all_blocks=1 00:10:44.425 --rc geninfo_unexecuted_blocks=1 00:10:44.425 00:10:44.425 ' 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:44.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:44.425 --rc genhtml_branch_coverage=1 00:10:44.425 --rc genhtml_function_coverage=1 00:10:44.425 --rc genhtml_legend=1 00:10:44.425 --rc geninfo_all_blocks=1 00:10:44.425 --rc geninfo_unexecuted_blocks=1 00:10:44.425 00:10:44.425 ' 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.425 02:32:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.425 02:32:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.425 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.426 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.426 02:32:14 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.426 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:44.426 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.426 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:44.426 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:44.426 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:44.426 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.426 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:10:44.426 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.426 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:44.426 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:44.426 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:44.426 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:44.426 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:44.426 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:44.426 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:44.426 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:44.426 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:44.426 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.426 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:44.426 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:44.426 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:44.426 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.426 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.426 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.426 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:44.426 
02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:44.426 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:44.426 02:32:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.000 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:51.000 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:51.000 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:51.000 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:51.000 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:51.000 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:51.000 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:51.000 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:51.000 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:51.000 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:51.000 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:51.000 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:51.000 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:51.000 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:51.000 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:51.000 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:10:51.000 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:51.000 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:51.000 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:51.000 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:51.000 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:51.000 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:51.000 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:51.000 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:51.000 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:51.001 02:32:20 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:51.001 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:51.001 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:51.001 
02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:51.001 Found net devices under 0000:af:00.0: cvl_0_0 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:51.001 Found net devices under 0000:af:00.1: cvl_0_1 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:51.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:51.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:10:51.001 00:10:51.001 --- 10.0.0.2 ping statistics --- 00:10:51.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.001 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:51.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:51.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:10:51.001 00:10:51.001 --- 10.0.0.1 ping statistics --- 00:10:51.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.001 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:51.001 02:32:20 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=862095 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 862095 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 862095 ']' 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:51.001 02:32:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.001 [2024-12-16 02:32:20.824799] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:10:51.001 [2024-12-16 02:32:20.824858] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:51.001 [2024-12-16 02:32:20.904935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:51.001 [2024-12-16 02:32:20.928164] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:51.001 [2024-12-16 02:32:20.928202] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:51.001 [2024-12-16 02:32:20.928209] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:51.001 [2024-12-16 02:32:20.928214] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:51.002 [2024-12-16 02:32:20.928219] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:51.002 [2024-12-16 02:32:20.932867] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:10:51.002 [2024-12-16 02:32:20.932942] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:10:51.002 [2024-12-16 02:32:20.933060] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:51.002 [2024-12-16 02:32:20.933060] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.002 [2024-12-16 02:32:21.069095] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.002 02:32:21 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.002 Malloc0 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.002 [2024-12-16 02:32:21.130309] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:51.002 { 00:10:51.002 "params": { 00:10:51.002 "name": "Nvme$subsystem", 00:10:51.002 "trtype": "$TEST_TRANSPORT", 00:10:51.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:51.002 "adrfam": "ipv4", 00:10:51.002 "trsvcid": "$NVMF_PORT", 00:10:51.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:51.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:51.002 "hdgst": ${hdgst:-false}, 00:10:51.002 "ddgst": ${ddgst:-false} 00:10:51.002 }, 00:10:51.002 "method": "bdev_nvme_attach_controller" 00:10:51.002 } 00:10:51.002 EOF 00:10:51.002 )") 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:51.002 02:32:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:51.002 "params": { 00:10:51.002 "name": "Nvme1", 00:10:51.002 "trtype": "tcp", 00:10:51.002 "traddr": "10.0.0.2", 00:10:51.002 "adrfam": "ipv4", 00:10:51.002 "trsvcid": "4420", 00:10:51.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:51.002 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:51.002 "hdgst": false, 00:10:51.002 "ddgst": false 00:10:51.002 }, 00:10:51.002 "method": "bdev_nvme_attach_controller" 00:10:51.002 }' 00:10:51.002 [2024-12-16 02:32:21.178284] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:10:51.002 [2024-12-16 02:32:21.178325] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid862298 ] 00:10:51.002 [2024-12-16 02:32:21.253030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:51.002 [2024-12-16 02:32:21.278094] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:51.002 [2024-12-16 02:32:21.278201] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.002 [2024-12-16 02:32:21.278202] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:51.002 I/O targets: 00:10:51.002 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:51.002 00:10:51.002 00:10:51.002 CUnit - A unit testing framework for C - Version 2.1-3 00:10:51.002 http://cunit.sourceforge.net/ 00:10:51.002 00:10:51.002 00:10:51.002 Suite: bdevio tests on: Nvme1n1 00:10:51.002 Test: blockdev write read block ...passed 00:10:51.002 Test: blockdev write zeroes read block ...passed 00:10:51.002 Test: blockdev write zeroes read no split ...passed 00:10:51.002 Test: blockdev write zeroes read split 
...passed 00:10:51.002 Test: blockdev write zeroes read split partial ...passed 00:10:51.002 Test: blockdev reset ...[2024-12-16 02:32:21.583796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:51.002 [2024-12-16 02:32:21.583863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22b2340 (9): Bad file descriptor 00:10:51.002 [2024-12-16 02:32:21.637994] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:10:51.002 passed 00:10:51.261 Test: blockdev write read 8 blocks ...passed 00:10:51.261 Test: blockdev write read size > 128k ...passed 00:10:51.261 Test: blockdev write read invalid size ...passed 00:10:51.261 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:51.261 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:51.261 Test: blockdev write read max offset ...passed 00:10:51.261 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:51.261 Test: blockdev writev readv 8 blocks ...passed 00:10:51.261 Test: blockdev writev readv 30 x 1block ...passed 00:10:51.261 Test: blockdev writev readv block ...passed 00:10:51.261 Test: blockdev writev readv size > 128k ...passed 00:10:51.261 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:51.261 Test: blockdev comparev and writev ...[2024-12-16 02:32:21.848571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:51.261 [2024-12-16 02:32:21.848605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:51.261 [2024-12-16 02:32:21.848619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:51.261 [2024-12-16 
02:32:21.848627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:51.261 [2024-12-16 02:32:21.848876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:51.261 [2024-12-16 02:32:21.848887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:51.261 [2024-12-16 02:32:21.848898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:51.261 [2024-12-16 02:32:21.848905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:51.261 [2024-12-16 02:32:21.849131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:51.261 [2024-12-16 02:32:21.849141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:51.262 [2024-12-16 02:32:21.849153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:51.262 [2024-12-16 02:32:21.849163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:51.262 [2024-12-16 02:32:21.849393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:51.262 [2024-12-16 02:32:21.849403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:51.262 [2024-12-16 02:32:21.849415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:10:51.262 [2024-12-16 02:32:21.849421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:51.262 passed 00:10:51.521 Test: blockdev nvme passthru rw ...passed 00:10:51.521 Test: blockdev nvme passthru vendor specific ...[2024-12-16 02:32:21.931185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:51.521 [2024-12-16 02:32:21.931205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:51.521 [2024-12-16 02:32:21.931305] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:51.521 [2024-12-16 02:32:21.931315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:51.521 [2024-12-16 02:32:21.931418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:51.521 [2024-12-16 02:32:21.931427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:51.521 [2024-12-16 02:32:21.931525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:51.521 [2024-12-16 02:32:21.931534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:51.521 passed 00:10:51.521 Test: blockdev nvme admin passthru ...passed 00:10:51.521 Test: blockdev copy ...passed 00:10:51.521 00:10:51.521 Run Summary: Type Total Ran Passed Failed Inactive 00:10:51.521 suites 1 1 n/a 0 0 00:10:51.521 tests 23 23 23 0 0 00:10:51.521 asserts 152 152 152 0 n/a 00:10:51.521 00:10:51.521 Elapsed time = 1.041 seconds 
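As an aside, the connection parameters that `gen_nvmf_target_json` printed earlier in this run (name `Nvme1`, `tcp` transport, target `10.0.0.2:4420`, subsystem `nqn.2016-06.io.spdk:cnode1`) can be reproduced and sanity-checked outside the harness. This is an illustrative sketch with values copied from the log; the output path `/tmp/nvme1.json` is an assumption, not something the test uses.

```shell
#!/bin/sh
# Recreate the bdev_nvme_attach_controller config block emitted by
# gen_nvmf_target_json in the log above, then validate it as JSON.
cat > /tmp/nvme1.json <<'EOF'
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
# python3 exits non-zero on malformed JSON, so this doubles as a check.
python3 -m json.tool /tmp/nvme1.json > /dev/null && echo "config OK"
```

In the actual run this JSON is fed to the bdevio binary over `/dev/fd/62`, which is why it never touches disk.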
00:10:51.521 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:51.521 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.521 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:51.521 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.521 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:51.521 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:51.521 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:51.521 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:51.521 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:51.521 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:51.521 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:51.521 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:51.521 rmmod nvme_tcp 00:10:51.521 rmmod nvme_fabrics 00:10:51.521 rmmod nvme_keyring 00:10:51.780 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:51.780 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:51.780 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:51.780 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 862095 ']' 00:10:51.780 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 862095 00:10:51.780 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- 
# '[' -z 862095 ']' 00:10:51.780 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 862095 00:10:51.780 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:51.780 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:51.780 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 862095 00:10:51.780 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:51.780 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:51.780 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 862095' 00:10:51.780 killing process with pid 862095 00:10:51.780 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 862095 00:10:51.780 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 862095 00:10:51.780 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:51.780 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:51.780 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:51.780 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:51.780 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:51.780 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:51.780 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:51.780 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:10:51.780 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:51.780 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.780 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.780 02:32:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:54.315 00:10:54.315 real 0m9.956s 00:10:54.315 user 0m9.645s 00:10:54.315 sys 0m5.026s 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:54.315 ************************************ 00:10:54.315 END TEST nvmf_bdevio 00:10:54.315 ************************************ 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:54.315 00:10:54.315 real 4m34.508s 00:10:54.315 user 10m23.702s 00:10:54.315 sys 1m38.617s 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:54.315 ************************************ 00:10:54.315 END TEST nvmf_target_core 00:10:54.315 ************************************ 00:10:54.315 02:32:24 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:54.315 02:32:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:54.315 02:32:24 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.315 02:32:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:10:54.315 ************************************ 00:10:54.315 START TEST nvmf_target_extra 00:10:54.315 ************************************ 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:54.315 * Looking for test storage... 00:10:54.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:54.315 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:54.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.316 --rc genhtml_branch_coverage=1 00:10:54.316 --rc genhtml_function_coverage=1 00:10:54.316 --rc genhtml_legend=1 00:10:54.316 --rc geninfo_all_blocks=1 
00:10:54.316 --rc geninfo_unexecuted_blocks=1 00:10:54.316 00:10:54.316 ' 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:54.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.316 --rc genhtml_branch_coverage=1 00:10:54.316 --rc genhtml_function_coverage=1 00:10:54.316 --rc genhtml_legend=1 00:10:54.316 --rc geninfo_all_blocks=1 00:10:54.316 --rc geninfo_unexecuted_blocks=1 00:10:54.316 00:10:54.316 ' 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:54.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.316 --rc genhtml_branch_coverage=1 00:10:54.316 --rc genhtml_function_coverage=1 00:10:54.316 --rc genhtml_legend=1 00:10:54.316 --rc geninfo_all_blocks=1 00:10:54.316 --rc geninfo_unexecuted_blocks=1 00:10:54.316 00:10:54.316 ' 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:54.316 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.316 --rc genhtml_branch_coverage=1 00:10:54.316 --rc genhtml_function_coverage=1 00:10:54.316 --rc genhtml_legend=1 00:10:54.316 --rc geninfo_all_blocks=1 00:10:54.316 --rc geninfo_unexecuted_blocks=1 00:10:54.316 00:10:54.316 ' 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:54.316 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:54.316 02:32:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:54.317 02:32:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:54.317 02:32:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.317 02:32:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:54.317 ************************************ 00:10:54.317 START TEST nvmf_example 00:10:54.317 ************************************ 00:10:54.317 02:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:54.317 * Looking for test storage... 00:10:54.317 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:54.317 02:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:54.317 02:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:10:54.317 02:32:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:54.581 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:54.581 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:54.581 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:54.581 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:54.581 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:54.581 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:54.581 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:54.581 
02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:54.581 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:54.581 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:54.581 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:54.581 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:54.581 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:54.581 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:54.581 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:54.581 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:54.581 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:54.581 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:54.581 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:54.581 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:54.581 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:54.581 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:54.581 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:54.581 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:54.581 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:54.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.582 --rc genhtml_branch_coverage=1 00:10:54.582 --rc genhtml_function_coverage=1 00:10:54.582 --rc genhtml_legend=1 00:10:54.582 --rc geninfo_all_blocks=1 00:10:54.582 --rc geninfo_unexecuted_blocks=1 00:10:54.582 00:10:54.582 ' 00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:54.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.582 --rc genhtml_branch_coverage=1 00:10:54.582 --rc genhtml_function_coverage=1 00:10:54.582 --rc genhtml_legend=1 00:10:54.582 --rc geninfo_all_blocks=1 00:10:54.582 --rc geninfo_unexecuted_blocks=1 00:10:54.582 00:10:54.582 ' 00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:54.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.582 --rc genhtml_branch_coverage=1 00:10:54.582 --rc genhtml_function_coverage=1 00:10:54.582 --rc genhtml_legend=1 00:10:54.582 --rc geninfo_all_blocks=1 00:10:54.582 --rc geninfo_unexecuted_blocks=1 00:10:54.582 00:10:54.582 ' 00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:54.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:54.582 --rc 
genhtml_branch_coverage=1 00:10:54.582 --rc genhtml_function_coverage=1 00:10:54.582 --rc genhtml_legend=1 00:10:54.582 --rc geninfo_all_blocks=1 00:10:54.582 --rc geninfo_unexecuted_blocks=1 00:10:54.582 00:10:54.582 ' 00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.582 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:54.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:54.583 02:32:25 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.583 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.584 
02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.584 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:54.584 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:54.584 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:54.584 02:32:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:01.157 02:32:30 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:01.157 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:01.157 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:01.157 Found net devices under 0000:af:00.0: cvl_0_0 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:01.157 02:32:30 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:01.157 Found net devices under 0000:af:00.1: cvl_0_1 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:01.157 
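The enumeration above (nvmf/common.sh@410-429) resolves each supported PCI NIC to its kernel net device by globbing the device's `net/` directory in sysfs, which is where the "Found net devices under 0000:af:00.0: cvl_0_0" lines come from. A minimal sketch of that lookup, with the sysfs root parameterized so it can be exercised against a fake tree (the function name `pci_net_devs_for` is illustrative, not from the harness):

```shell
# Sketch of the PCI-to-netdev resolution seen in the log: list the entries
# under /sys/bus/pci/devices/<pci>/net/ and strip the path prefix, mirroring
# pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) followed by ##*/.
pci_net_devs_for() {
  local pci=$1 sysroot=${2:-/sys}   # second arg is an assumption for testing
  local -a devs=("$sysroot/bus/pci/devices/$pci/net/"*)
  # Without nullglob an unmatched glob stays literal, so -e detects "no NIC".
  [[ -e ${devs[0]} ]] || return 1
  printf '%s\n' "${devs[@]##*/}"
}

pci_net_devs_for 0000:af:00.0   # on the test node this would print cvl_0_0
```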
02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:01.157 02:32:30 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:01.157 02:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:01.157 02:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:01.158 02:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:01.158 02:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:01.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:01.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.420 ms 00:11:01.158 00:11:01.158 --- 10.0.0.2 ping statistics --- 00:11:01.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.158 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:11:01.158 02:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:01.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:01.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:11:01.158 00:11:01.158 --- 10.0.0.1 ping statistics --- 00:11:01.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:01.158 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:11:01.158 02:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:01.158 02:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:01.158 02:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:01.158 02:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:01.158 02:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:01.158 02:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:01.158 02:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:01.158 02:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:01.158 02:32:31 
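The `ip`/`iptables` commands between 02:32:30 and 02:32:31 implement nvmf_tcp_init: one port of the NIC is moved into a fresh network namespace to play the target (10.0.0.2), the other stays in the root namespace as the initiator (10.0.0.1), and port 4420 is opened, after which both directions are verified with `ping`. A dry-run sketch of that sequence, assuming the interface and namespace names from this log; the function prints the commands instead of executing them, so it can be reviewed (or piped to `sudo bash`) without root:

```shell
# Sketch of the namespace setup performed by nvmf_tcp_init in nvmf/common.sh.
# emit_netns_setup is an illustrative name; IPs and names follow the log.
emit_netns_setup() {
  local target_if=$1 initiator_if=$2 ns=$3
  cat <<EOF
ip -4 addr flush $target_if
ip -4 addr flush $initiator_if
ip netns add $ns
ip link set $target_if netns $ns
ip addr add 10.0.0.1/24 dev $initiator_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if
ip link set $initiator_if up
ip netns exec $ns ip link set $target_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT
EOF
}

emit_netns_setup cvl_0_0 cvl_0_1 cvl_0_0_ns_spdk
```

Keeping the target behind a namespace is what lets a single host exercise a real TCP path end to end, which the two ping checks confirm before the target app starts.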
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:01.158 02:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:01.158 02:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:01.158 02:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:01.158 02:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.158 02:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:01.158 02:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:01.158 02:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=866059 00:11:01.158 02:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:01.158 02:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:01.158 02:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 866059 00:11:01.158 02:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 866059 ']' 00:11:01.158 02:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:01.158 02:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.158 02:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:01.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:01.158 02:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.158 02:32:31 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.726 02:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:01.726 02:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:01.726 02:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:01.726 02:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:01.726 02:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.726 02:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:01.726 02:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.726 02:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.726 02:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.726 02:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:01.726 02:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.726 02:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.726 02:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.726 02:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:01.726 02:32:32 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:01.726 02:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.726 02:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.726 02:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.726 02:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:01.726 02:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:01.726 02:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.726 02:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.726 02:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.726 02:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:01.726 02:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.726 02:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:01.726 02:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.726 02:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:01.726 02:32:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:13.936 Initializing NVMe Controllers 00:11:13.936 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:13.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:13.936 Initialization complete. Launching workers. 00:11:13.936 ======================================================== 00:11:13.936 Latency(us) 00:11:13.936 Device Information : IOPS MiB/s Average min max 00:11:13.936 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18224.23 71.19 3511.42 520.12 15553.80 00:11:13.936 ======================================================== 00:11:13.936 Total : 18224.23 71.19 3511.42 520.12 15553.80 00:11:13.936 00:11:13.936 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:13.936 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:13.936 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:13.936 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:13.936 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:13.936 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:13.936 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:13.936 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:13.936 rmmod nvme_tcp 00:11:13.936 rmmod nvme_fabrics 00:11:13.936 rmmod nvme_keyring 00:11:13.936 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:13.936 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
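Before the perf run above, nvmf_example.sh@45-57 configured the target over RPC: create the TCP transport, back it with a 64 MiB/512 B malloc bdev, create subsystem `nqn.2016-06.io.spdk:cnode1`, attach the namespace, and listen on 10.0.0.2:4420. A sketch of the equivalent standalone calls (the log drives these through `rpc_cmd`; the `scripts/rpc.py` path is an assumption about a typical SPDK checkout), again emitted dry-run so the sequence can be inspected:

```shell
# Sketch of the RPC bring-up sequence from nvmf_example.sh, printed rather
# than executed. emit_target_setup is an illustrative helper name.
emit_target_setup() {
  local rpc="scripts/rpc.py"   # assumed location inside an SPDK tree
  cat <<EOF
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
EOF
}

emit_target_setup
```

The `-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:...'` string passed to spdk_nvme_perf then points the initiator at exactly the listener created in the last call.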
00:11:13.937 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:13.937 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 866059 ']' 00:11:13.937 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 866059 00:11:13.937 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 866059 ']' 00:11:13.937 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 866059 00:11:13.937 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:13.937 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:13.937 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 866059 00:11:13.937 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:13.937 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:13.937 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 866059' 00:11:13.937 killing process with pid 866059 00:11:13.937 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 866059 00:11:13.937 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 866059 00:11:13.937 nvmf threads initialize successfully 00:11:13.937 bdev subsystem init successfully 00:11:13.937 created a nvmf target service 00:11:13.937 create targets's poll groups done 00:11:13.937 all subsystems of target started 00:11:13.937 nvmf target is running 00:11:13.937 all subsystems of target stopped 00:11:13.937 destroy targets's poll groups done 00:11:13.937 destroyed the nvmf target service 00:11:13.937 bdev subsystem finish 
successfully 00:11:13.937 nvmf threads destroy successfully 00:11:13.937 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:13.937 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:13.937 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:13.937 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:13.937 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:13.937 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:13.937 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:13.937 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:13.937 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:13.937 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.937 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:13.937 02:32:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.196 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:14.196 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:14.196 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:14.196 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:14.455 00:11:14.456 real 0m20.012s 00:11:14.456 user 0m46.355s 00:11:14.456 sys 0m6.081s 00:11:14.456 02:32:44 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:14.456 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:14.456 ************************************ 00:11:14.456 END TEST nvmf_example 00:11:14.456 ************************************ 00:11:14.456 02:32:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:14.456 02:32:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:14.456 02:32:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:14.456 02:32:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:14.456 ************************************ 00:11:14.456 START TEST nvmf_filesystem 00:11:14.456 ************************************ 00:11:14.456 02:32:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:14.456 * Looking for test storage... 
00:11:14.456 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:14.456 
02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:14.456 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:14.456 --rc genhtml_branch_coverage=1 00:11:14.456 --rc genhtml_function_coverage=1 00:11:14.456 --rc genhtml_legend=1 00:11:14.456 --rc geninfo_all_blocks=1 00:11:14.456 --rc geninfo_unexecuted_blocks=1 00:11:14.456 00:11:14.456 ' 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:14.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.456 --rc genhtml_branch_coverage=1 00:11:14.456 --rc genhtml_function_coverage=1 00:11:14.456 --rc genhtml_legend=1 00:11:14.456 --rc geninfo_all_blocks=1 00:11:14.456 --rc geninfo_unexecuted_blocks=1 00:11:14.456 00:11:14.456 ' 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:14.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.456 --rc genhtml_branch_coverage=1 00:11:14.456 --rc genhtml_function_coverage=1 00:11:14.456 --rc genhtml_legend=1 00:11:14.456 --rc geninfo_all_blocks=1 00:11:14.456 --rc geninfo_unexecuted_blocks=1 00:11:14.456 00:11:14.456 ' 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:14.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.456 --rc genhtml_branch_coverage=1 00:11:14.456 --rc genhtml_function_coverage=1 00:11:14.456 --rc genhtml_legend=1 00:11:14.456 --rc geninfo_all_blocks=1 00:11:14.456 --rc geninfo_unexecuted_blocks=1 00:11:14.456 00:11:14.456 ' 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:14.456 02:32:45 
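The preamble above runs `cmp_versions 1.15 '<' 2` from scripts/common.sh to decide which lcov flags to export: both version strings are split on `.-:` and compared field by field. A simplified, numeric-only sketch of that less-than comparison (the `ver_lt` name is illustrative; the real helper also handles `>`, `=` and mixed separators):

```shell
# Sketch of the field-wise version compare behind `lt 1.15 2` in the log:
# split on dots, pad the shorter version with zeros, compare numerically.
ver_lt() {
  local IFS=.
  local -a a=($1) b=($2)
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for ((i = 0; i < n; i++)); do
    local x=${a[i]:-0} y=${b[i]:-0}
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1   # equal versions are not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

Note the field-wise rule is why `1.9 < 1.15` holds here (9 < 15), the opposite of a plain string comparison.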
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:14.456 02:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:14.456 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:14.716 02:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 
-- # CONFIG_ARCH=native 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:14.716 
02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:14.716 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:14.717 02:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:14.717 #define SPDK_CONFIG_H 00:11:14.717 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:14.717 #define SPDK_CONFIG_APPS 1 00:11:14.717 #define SPDK_CONFIG_ARCH native 00:11:14.717 #undef SPDK_CONFIG_ASAN 00:11:14.717 #undef SPDK_CONFIG_AVAHI 00:11:14.717 #undef SPDK_CONFIG_CET 00:11:14.717 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:14.717 #define SPDK_CONFIG_COVERAGE 1 00:11:14.717 #define SPDK_CONFIG_CROSS_PREFIX 00:11:14.717 #undef SPDK_CONFIG_CRYPTO 00:11:14.717 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:14.717 #undef SPDK_CONFIG_CUSTOMOCF 00:11:14.717 #undef SPDK_CONFIG_DAOS 00:11:14.717 #define SPDK_CONFIG_DAOS_DIR 00:11:14.717 #define SPDK_CONFIG_DEBUG 1 00:11:14.717 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:14.717 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:14.717 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:14.717 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:14.717 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:14.717 #undef SPDK_CONFIG_DPDK_UADK 00:11:14.717 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:14.717 #define SPDK_CONFIG_EXAMPLES 1 00:11:14.717 #undef SPDK_CONFIG_FC 00:11:14.717 #define SPDK_CONFIG_FC_PATH 00:11:14.717 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:14.717 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:14.717 #define SPDK_CONFIG_FSDEV 1 00:11:14.717 #undef SPDK_CONFIG_FUSE 00:11:14.717 #undef SPDK_CONFIG_FUZZER 00:11:14.717 #define 
SPDK_CONFIG_FUZZER_LIB 00:11:14.717 #undef SPDK_CONFIG_GOLANG 00:11:14.717 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:14.717 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:14.717 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:14.717 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:14.717 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:14.717 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:14.717 #undef SPDK_CONFIG_HAVE_LZ4 00:11:14.717 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:14.717 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:14.717 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:14.717 #define SPDK_CONFIG_IDXD 1 00:11:14.717 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:14.717 #undef SPDK_CONFIG_IPSEC_MB 00:11:14.717 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:14.717 #define SPDK_CONFIG_ISAL 1 00:11:14.717 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:14.717 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:14.717 #define SPDK_CONFIG_LIBDIR 00:11:14.717 #undef SPDK_CONFIG_LTO 00:11:14.717 #define SPDK_CONFIG_MAX_LCORES 128 00:11:14.717 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:14.717 #define SPDK_CONFIG_NVME_CUSE 1 00:11:14.717 #undef SPDK_CONFIG_OCF 00:11:14.717 #define SPDK_CONFIG_OCF_PATH 00:11:14.717 #define SPDK_CONFIG_OPENSSL_PATH 00:11:14.717 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:14.717 #define SPDK_CONFIG_PGO_DIR 00:11:14.717 #undef SPDK_CONFIG_PGO_USE 00:11:14.717 #define SPDK_CONFIG_PREFIX /usr/local 00:11:14.717 #undef SPDK_CONFIG_RAID5F 00:11:14.717 #undef SPDK_CONFIG_RBD 00:11:14.717 #define SPDK_CONFIG_RDMA 1 00:11:14.717 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:14.717 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:14.717 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:14.717 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:14.717 #define SPDK_CONFIG_SHARED 1 00:11:14.717 #undef SPDK_CONFIG_SMA 00:11:14.717 #define SPDK_CONFIG_TESTS 1 00:11:14.717 #undef SPDK_CONFIG_TSAN 00:11:14.717 #define SPDK_CONFIG_UBLK 1 00:11:14.717 #define SPDK_CONFIG_UBSAN 1 00:11:14.717 #undef 
SPDK_CONFIG_UNIT_TESTS 00:11:14.717 #undef SPDK_CONFIG_URING 00:11:14.717 #define SPDK_CONFIG_URING_PATH 00:11:14.717 #undef SPDK_CONFIG_URING_ZNS 00:11:14.717 #undef SPDK_CONFIG_USDT 00:11:14.717 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:14.717 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:14.717 #define SPDK_CONFIG_VFIO_USER 1 00:11:14.717 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:14.717 #define SPDK_CONFIG_VHOST 1 00:11:14.717 #define SPDK_CONFIG_VIRTIO 1 00:11:14.717 #undef SPDK_CONFIG_VTUNE 00:11:14.717 #define SPDK_CONFIG_VTUNE_DIR 00:11:14.717 #define SPDK_CONFIG_WERROR 1 00:11:14.717 #define SPDK_CONFIG_WPDK_DIR 00:11:14.717 #undef SPDK_CONFIG_XNVME 00:11:14.717 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.717 02:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:14.717 02:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:14.717 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:14.718 
02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:14.718 02:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:14.718 
02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@140 -- # : v23.11 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:14.718 02:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 
00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:14.718 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 868415 ]] 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 868415 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.TyA7Id 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.TyA7Id/tests/target /tmp/spdk.TyA7Id 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:14.719 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=722997248 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=4561432576 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=88111398912 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=95552417792 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7441018880 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47766175744 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776206848 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19087470592 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19110486016 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23015424 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47776002048 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776210944 00:11:14.720 02:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=208896 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=9555226624 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=9555238912 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:14.720 * Looking for test storage... 
00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=88111398912 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9655611392 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.720 02:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:14.720 02:32:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:14.720 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:14.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.721 --rc genhtml_branch_coverage=1 00:11:14.721 --rc genhtml_function_coverage=1 00:11:14.721 --rc genhtml_legend=1 00:11:14.721 --rc geninfo_all_blocks=1 00:11:14.721 --rc geninfo_unexecuted_blocks=1 00:11:14.721 00:11:14.721 ' 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:14.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.721 --rc genhtml_branch_coverage=1 00:11:14.721 --rc genhtml_function_coverage=1 00:11:14.721 --rc genhtml_legend=1 00:11:14.721 --rc geninfo_all_blocks=1 00:11:14.721 --rc geninfo_unexecuted_blocks=1 00:11:14.721 00:11:14.721 ' 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:14.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.721 --rc genhtml_branch_coverage=1 00:11:14.721 --rc genhtml_function_coverage=1 00:11:14.721 --rc genhtml_legend=1 00:11:14.721 --rc geninfo_all_blocks=1 00:11:14.721 --rc geninfo_unexecuted_blocks=1 00:11:14.721 00:11:14.721 ' 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:14.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:14.721 --rc genhtml_branch_coverage=1 00:11:14.721 --rc genhtml_function_coverage=1 00:11:14.721 --rc genhtml_legend=1 00:11:14.721 --rc geninfo_all_blocks=1 00:11:14.721 --rc geninfo_unexecuted_blocks=1 00:11:14.721 00:11:14.721 ' 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:14.721 02:32:45 
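The trace above walks through the `cmp_versions` helper deciding whether the installed lcov (1.15) is older than 2: it splits each version on `.`, `-`, and `:` into arrays and compares component by component. A minimal standalone sketch of that technique follows; the function name `lt` matches the trace, but the body is an illustrative reconstruction, not the exact SPDK `scripts/common.sh` code.

```shell
#!/usr/bin/env bash
# Hedged sketch of the dotted-version comparison seen in the trace.
# Returns 0 (true) when $1 is strictly older than $2.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"    # split "1.15" -> (1 15), as the trace does
    IFS=.-: read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        # Missing components compare as 0, so "2" behaves like "2.0"
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not less-than
}

lt 1.15 2 && echo "lcov predates 2, enable branch-coverage rc options"
```

Comparing numerically per component (rather than lexically on the whole string) is what makes `1.15 < 2` come out correct even though `"1.15" > "2"` is false as a string comparison in some locales and `"15" > "2"` lexically.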
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:14.721 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.721 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:14.980 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:14.980 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:14.980 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.980 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.980 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.980 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:14.980 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:14.980 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:14.980 02:32:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:21.551 02:32:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:21.551 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:21.551 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.551 02:32:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:21.551 Found net devices under 0000:af:00.0: cvl_0_0 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:21.551 Found net devices under 0000:af:00.1: cvl_0_1 00:11:21.551 02:32:51 
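The discovery loop above maps each NVMe-capable PCI function (here the two Intel E810 ports at 0000:af:00.0/1) to its kernel interface by globbing the device's `net/` directory in sysfs, then stripping the path to leave the interface name (`cvl_0_0`, `cvl_0_1`). A self-contained sketch of that lookup, under the assumption of a standard sysfs layout; `list_pci_net_devs` and its optional root argument are illustrative, not SPDK helpers.

```shell
#!/usr/bin/env bash
# Hedged sketch of the sysfs PCI -> net-device lookup in the trace.
# $1 = PCI address, $2 = sysfs root (defaults to the real one; overridable for testing).
list_pci_net_devs() {
    local pci=$1 sysfs_root=${2:-/sys/bus/pci/devices}
    local -a pci_net_devs=("$sysfs_root/$pci/net/"*)
    # An unmatched glob stays literal, so test the first entry before trusting it
    [[ -e ${pci_net_devs[0]} ]] || return 1
    pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
}

list_pci_net_devs 0000:af:00.0   # on the test rig above, prints cvl_0_0
```

The `"${array[@]##*/}"` expansion applies the prefix-strip to every element at once, which is exactly the `pci_net_devs=("${pci_net_devs[@]##*/}")` step visible in the trace.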
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:21.551 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:21.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:21.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:11:21.551 00:11:21.551 --- 10.0.0.2 ping statistics --- 00:11:21.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.551 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:21.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:21.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:11:21.552 00:11:21.552 --- 10.0.0.1 ping statistics --- 00:11:21.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.552 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:21.552 02:32:51 
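The `nvmf_tcp_init` sequence above builds a two-endpoint TCP rig on a single host: one NIC port is moved into a fresh network namespace to act as the target (10.0.0.2), the other stays in the host namespace as the initiator (10.0.0.1), and a ping in each direction confirms the path before the nvmf target starts under `ip netns exec`. The sketch below reproduces those steps behind a `DRY_RUN` switch, since the real commands need root and the `cvl_0_0`/`cvl_0_1` interfaces from this rig; names and addresses are taken from the log.

```shell
#!/usr/bin/env bash
# Hedged sketch of the netns topology set up in the trace.
# DRY_RUN=1 prints each command instead of executing it (the real run needs root).
run() { if [[ ${DRY_RUN:-0} == 1 ]]; then echo "$*"; else "$@"; fi; }

setup_netns_pair() {
    local ns=$1 tgt_if=$2 ini_if=$3
    run ip -4 addr flush "$tgt_if"                 # start from clean addressing
    run ip -4 addr flush "$ini_if"
    run ip netns add "$ns"
    run ip link set "$tgt_if" netns "$ns"          # target port enters the namespace
    run ip addr add 10.0.0.1/24 dev "$ini_if"      # initiator (host) side
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"   # target side
    run ip link set "$ini_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run ip netns exec "$ns" ip link set lo up
}

# Preview the command sequence; drop DRY_RUN=1 to apply it as root.
DRY_RUN=1 setup_netns_pair cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

Putting the target in its own namespace gives the two ends distinct network stacks, so the kernel routes 10.0.0.1 to 10.0.0.2 over the physical link rather than short-circuiting through loopback, which is what makes the single-host run a faithful TCP transport test.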
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:21.552 ************************************ 00:11:21.552 START TEST nvmf_filesystem_no_in_capsule 00:11:21.552 ************************************ 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=871580 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 871580 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 871580 ']' 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.552 [2024-12-16 02:32:51.415275] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:11:21.552 [2024-12-16 02:32:51.415320] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.552 [2024-12-16 02:32:51.495868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:21.552 [2024-12-16 02:32:51.518489] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.552 [2024-12-16 02:32:51.518530] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:21.552 [2024-12-16 02:32:51.518537] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.552 [2024-12-16 02:32:51.518543] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.552 [2024-12-16 02:32:51.518548] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:21.552 [2024-12-16 02:32:51.519875] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.552 [2024-12-16 02:32:51.519962] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.552 [2024-12-16 02:32:51.520014] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:21.552 [2024-12-16 02:32:51.520016] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.552 [2024-12-16 02:32:51.659817] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.552 Malloc1 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.552 [2024-12-16 02:32:51.823002] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:21.552 02:32:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.552 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:21.552 { 00:11:21.552 "name": "Malloc1", 00:11:21.552 "aliases": [ 00:11:21.552 "4dcf6c71-44df-479c-9925-669da0851c5d" 00:11:21.552 ], 00:11:21.552 "product_name": "Malloc disk", 00:11:21.552 "block_size": 512, 00:11:21.552 "num_blocks": 1048576, 00:11:21.552 "uuid": "4dcf6c71-44df-479c-9925-669da0851c5d", 00:11:21.552 "assigned_rate_limits": { 00:11:21.552 "rw_ios_per_sec": 0, 00:11:21.552 "rw_mbytes_per_sec": 0, 00:11:21.552 "r_mbytes_per_sec": 0, 00:11:21.552 "w_mbytes_per_sec": 0 00:11:21.552 }, 00:11:21.552 "claimed": true, 00:11:21.552 "claim_type": "exclusive_write", 00:11:21.552 "zoned": false, 00:11:21.552 "supported_io_types": { 00:11:21.552 "read": true, 00:11:21.552 "write": true, 00:11:21.552 "unmap": true, 00:11:21.552 "flush": true, 00:11:21.552 "reset": true, 00:11:21.553 "nvme_admin": false, 00:11:21.553 "nvme_io": false, 00:11:21.553 "nvme_io_md": false, 00:11:21.553 "write_zeroes": true, 00:11:21.553 "zcopy": true, 00:11:21.553 "get_zone_info": false, 00:11:21.553 "zone_management": false, 00:11:21.553 "zone_append": false, 00:11:21.553 "compare": false, 00:11:21.553 "compare_and_write": 
false, 00:11:21.553 "abort": true, 00:11:21.553 "seek_hole": false, 00:11:21.553 "seek_data": false, 00:11:21.553 "copy": true, 00:11:21.553 "nvme_iov_md": false 00:11:21.553 }, 00:11:21.553 "memory_domains": [ 00:11:21.553 { 00:11:21.553 "dma_device_id": "system", 00:11:21.553 "dma_device_type": 1 00:11:21.553 }, 00:11:21.553 { 00:11:21.553 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.553 "dma_device_type": 2 00:11:21.553 } 00:11:21.553 ], 00:11:21.553 "driver_specific": {} 00:11:21.553 } 00:11:21.553 ]' 00:11:21.553 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:21.553 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:21.553 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:21.553 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:21.553 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:21.553 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:21.553 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:21.553 02:32:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:22.489 02:32:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:22.489 02:32:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:22.489 02:32:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:22.489 02:32:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:22.489 02:32:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:25.021 02:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:25.021 02:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:25.021 02:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:25.021 02:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:25.022 02:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:25.022 02:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:25.022 02:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:25.022 02:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:25.022 02:32:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:25.022 02:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:25.022 02:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:25.022 02:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:25.022 02:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:25.022 02:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:25.022 02:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:25.022 02:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:25.022 02:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:25.022 02:32:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:25.589 02:32:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:26.525 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:26.525 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:26.525 02:32:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:26.525 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.525 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.525 ************************************ 00:11:26.525 START TEST filesystem_ext4 00:11:26.525 ************************************ 00:11:26.525 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:26.525 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:26.525 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:26.525 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:26.525 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:26.525 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:26.525 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:26.525 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:26.525 02:32:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:26.525 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:26.525 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:26.525 mke2fs 1.47.0 (5-Feb-2023) 00:11:26.784 Discarding device blocks: 0/522240 done 00:11:26.784 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:26.784 Filesystem UUID: 9153bcfb-0750-4bec-b232-9e89708d9522 00:11:26.784 Superblock backups stored on blocks: 00:11:26.784 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:26.784 00:11:26.784 Allocating group tables: 0/64 done 00:11:26.784 Writing inode tables: 0/64 done 00:11:26.784 Creating journal (8192 blocks): done 00:11:26.784 Writing superblocks and filesystem accounting information: 0/64 done 00:11:26.784 00:11:26.784 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:26.784 02:32:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:33.349 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:33.349 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:33.349 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:33.349 02:33:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:33.349 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:33.349 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:33.349 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 871580 00:11:33.349 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:33.349 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:33.349 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:33.349 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:33.349 00:11:33.349 real 0m6.228s 00:11:33.349 user 0m0.026s 00:11:33.349 sys 0m0.072s 00:11:33.349 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.349 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:33.349 ************************************ 00:11:33.349 END TEST filesystem_ext4 00:11:33.349 ************************************ 00:11:33.349 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:33.349 
02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:33.349 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.349 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:33.349 ************************************ 00:11:33.349 START TEST filesystem_btrfs 00:11:33.349 ************************************ 00:11:33.349 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:33.349 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:33.349 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:33.349 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:33.350 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:33.350 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:33.350 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:33.350 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:33.350 02:33:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:33.350 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:33.350 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:33.350 btrfs-progs v6.8.1 00:11:33.350 See https://btrfs.readthedocs.io for more information. 00:11:33.350 00:11:33.350 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:33.350 NOTE: several default settings have changed in version 5.15, please make sure 00:11:33.350 this does not affect your deployments: 00:11:33.350 - DUP for metadata (-m dup) 00:11:33.350 - enabled no-holes (-O no-holes) 00:11:33.350 - enabled free-space-tree (-R free-space-tree) 00:11:33.350 00:11:33.350 Label: (null) 00:11:33.350 UUID: ea68dc49-470d-4014-b484-4c871e48b442 00:11:33.350 Node size: 16384 00:11:33.350 Sector size: 4096 (CPU page size: 4096) 00:11:33.350 Filesystem size: 510.00MiB 00:11:33.350 Block group profiles: 00:11:33.350 Data: single 8.00MiB 00:11:33.350 Metadata: DUP 32.00MiB 00:11:33.350 System: DUP 8.00MiB 00:11:33.350 SSD detected: yes 00:11:33.350 Zoned device: no 00:11:33.350 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:33.350 Checksum: crc32c 00:11:33.350 Number of devices: 1 00:11:33.350 Devices: 00:11:33.350 ID SIZE PATH 00:11:33.350 1 510.00MiB /dev/nvme0n1p1 00:11:33.350 00:11:33.350 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:33.350 02:33:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:33.918 02:33:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:33.918 02:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:33.918 02:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:33.918 02:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:33.918 02:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:33.918 02:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:33.918 02:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 871580 00:11:33.918 02:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:33.918 02:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:33.918 02:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:33.918 02:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:33.918 00:11:33.918 real 0m1.120s 00:11:33.918 user 0m0.026s 00:11:33.918 sys 0m0.115s 00:11:33.918 02:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.918 
02:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:33.918 ************************************ 00:11:33.918 END TEST filesystem_btrfs 00:11:33.918 ************************************ 00:11:34.177 02:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:34.177 02:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:34.177 02:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.177 02:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.177 ************************************ 00:11:34.177 START TEST filesystem_xfs 00:11:34.177 ************************************ 00:11:34.177 02:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:34.177 02:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:34.177 02:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:34.177 02:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:34.177 02:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:34.177 02:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:34.177 02:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:34.177 02:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:34.177 02:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:34.177 02:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:34.177 02:33:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:34.177 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:34.177 = sectsz=512 attr=2, projid32bit=1 00:11:34.177 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:34.177 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:34.177 data = bsize=4096 blocks=130560, imaxpct=25 00:11:34.177 = sunit=0 swidth=0 blks 00:11:34.177 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:34.177 log =internal log bsize=4096 blocks=16384, version=2 00:11:34.177 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:34.177 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:35.162 Discarding blocks...Done. 
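For readers following the harness output, the size check performed earlier in this run (target/filesystem.sh@58–67, before the ext4 test) reduces to the arithmetic below. This is a standalone sketch, not the script's own code: the variable names are illustrative, and the input values are exactly those the log reports from `rpc_cmd bdev_get_bdevs -b Malloc1` and `sec_size_to_bytes nvme0n1`.

```shell
# Sketch of the bdev-vs-namespace size comparison in target/filesystem.sh@58-67.
# Inputs are taken verbatim from the bdev_get_bdevs JSON printed above.
bs=512          # jq '.[] .block_size'  -> 512
nb=1048576      # jq '.[] .num_blocks'  -> 1048576
bdev_size_mb=$(( bs * nb / 1024 / 1024 ))       # 512 MiB
malloc_size=$(( bdev_size_mb * 1024 * 1024 ))   # 536870912 bytes
nvme_size=536870912   # sec_size_to_bytes reads this for /sys/block/nvme0n1
(( nvme_size == malloc_size )) && echo "sizes match: $malloc_size bytes"
```

The `(( nvme_size == malloc_size ))` guard mirrors filesystem.sh@67: the test only proceeds to partitioning and mkfs when the exported NVMe namespace is exactly the size of the backing Malloc bdev.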
00:11:35.162 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:35.162 02:33:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:37.159 02:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:37.159 02:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:37.159 02:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:37.159 02:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:37.159 02:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:37.159 02:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:37.159 02:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 871580 00:11:37.159 02:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:37.160 02:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:37.160 02:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:37.160 02:33:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:37.160 00:11:37.160 real 0m3.169s 00:11:37.160 user 0m0.027s 00:11:37.160 sys 0m0.072s 00:11:37.160 02:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:37.160 02:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:37.160 ************************************ 00:11:37.160 END TEST filesystem_xfs 00:11:37.160 ************************************ 00:11:37.419 02:33:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:37.678 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:37.678 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:37.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.678 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:37.678 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:37.678 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:37.678 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:37.678 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:37.678 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:37.678 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:37.678 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:37.678 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.678 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:37.678 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.678 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:37.678 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 871580 00:11:37.678 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 871580 ']' 00:11:37.678 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 871580 00:11:37.678 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:37.678 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:37.678 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 871580 00:11:37.678 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:37.678 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:37.678 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 871580' 00:11:37.678 killing process with pid 871580 00:11:37.678 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 871580 00:11:37.678 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 871580 00:11:38.247 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:38.247 00:11:38.247 real 0m17.249s 00:11:38.247 user 1m7.954s 00:11:38.247 sys 0m1.377s 00:11:38.247 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.247 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.247 ************************************ 00:11:38.247 END TEST nvmf_filesystem_no_in_capsule 00:11:38.247 ************************************ 00:11:38.247 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:38.247 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:38.247 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.247 02:33:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:38.247 ************************************ 00:11:38.247 START TEST nvmf_filesystem_in_capsule 00:11:38.247 ************************************ 00:11:38.247 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:38.247 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:38.247 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:38.247 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:38.247 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:38.247 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.247 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=875067 00:11:38.247 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 875067 00:11:38.247 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:38.247 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 875067 ']' 00:11:38.247 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:38.247 02:33:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:38.247 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:38.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:38.247 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:38.247 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.247 [2024-12-16 02:33:08.729134] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:11:38.247 [2024-12-16 02:33:08.729172] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:38.247 [2024-12-16 02:33:08.806172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:38.247 [2024-12-16 02:33:08.828947] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:38.247 [2024-12-16 02:33:08.828985] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:38.247 [2024-12-16 02:33:08.828996] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:38.247 [2024-12-16 02:33:08.829002] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:38.247 [2024-12-16 02:33:08.829007] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:38.247 [2024-12-16 02:33:08.830355] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:38.247 [2024-12-16 02:33:08.830393] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:38.247 [2024-12-16 02:33:08.830499] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:38.247 [2024-12-16 02:33:08.830500] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:38.507 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:38.507 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:38.507 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:38.507 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:38.507 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.507 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:38.507 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:38.507 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:38.507 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.507 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.507 [2024-12-16 02:33:08.954462] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:38.507 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.507 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:38.507 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.507 02:33:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.507 Malloc1 00:11:38.507 02:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.507 02:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:38.507 02:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.507 02:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.507 02:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.507 02:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:38.507 02:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.507 02:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.507 02:33:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.507 02:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:38.507 02:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.507 02:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.507 [2024-12-16 02:33:09.122031] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:38.507 02:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.507 02:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:38.507 02:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:38.507 02:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:38.507 02:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:38.507 02:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:38.507 02:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:38.507 02:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.507 02:33:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:38.507 02:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.507 02:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:38.507 { 00:11:38.507 "name": "Malloc1", 00:11:38.507 "aliases": [ 00:11:38.507 "5f24e261-fcfa-4706-8e54-a5e9a4aeffed" 00:11:38.507 ], 00:11:38.507 "product_name": "Malloc disk", 00:11:38.507 "block_size": 512, 00:11:38.507 "num_blocks": 1048576, 00:11:38.507 "uuid": "5f24e261-fcfa-4706-8e54-a5e9a4aeffed", 00:11:38.507 "assigned_rate_limits": { 00:11:38.507 "rw_ios_per_sec": 0, 00:11:38.507 "rw_mbytes_per_sec": 0, 00:11:38.507 "r_mbytes_per_sec": 0, 00:11:38.507 "w_mbytes_per_sec": 0 00:11:38.507 }, 00:11:38.507 "claimed": true, 00:11:38.507 "claim_type": "exclusive_write", 00:11:38.507 "zoned": false, 00:11:38.507 "supported_io_types": { 00:11:38.507 "read": true, 00:11:38.507 "write": true, 00:11:38.507 "unmap": true, 00:11:38.507 "flush": true, 00:11:38.507 "reset": true, 00:11:38.507 "nvme_admin": false, 00:11:38.507 "nvme_io": false, 00:11:38.507 "nvme_io_md": false, 00:11:38.507 "write_zeroes": true, 00:11:38.507 "zcopy": true, 00:11:38.507 "get_zone_info": false, 00:11:38.507 "zone_management": false, 00:11:38.507 "zone_append": false, 00:11:38.507 "compare": false, 00:11:38.507 "compare_and_write": false, 00:11:38.507 "abort": true, 00:11:38.507 "seek_hole": false, 00:11:38.507 "seek_data": false, 00:11:38.507 "copy": true, 00:11:38.507 "nvme_iov_md": false 00:11:38.507 }, 00:11:38.507 "memory_domains": [ 00:11:38.507 { 00:11:38.507 "dma_device_id": "system", 00:11:38.507 "dma_device_type": 1 00:11:38.507 }, 00:11:38.507 { 00:11:38.507 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.507 "dma_device_type": 2 00:11:38.507 } 00:11:38.507 ], 00:11:38.507 
"driver_specific": {} 00:11:38.507 } 00:11:38.507 ]' 00:11:38.507 02:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:38.767 02:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:38.767 02:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:38.767 02:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:38.767 02:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:38.767 02:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:38.767 02:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:38.767 02:33:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:40.145 02:33:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:40.145 02:33:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:40.145 02:33:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:40.145 02:33:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:11:40.145 02:33:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:42.051 02:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:42.051 02:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:42.051 02:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:42.051 02:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:42.051 02:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:42.051 02:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:42.051 02:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:42.051 02:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:42.051 02:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:42.051 02:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:42.051 02:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:42.051 02:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:42.051 02:33:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:42.051 02:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:42.051 02:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:42.051 02:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:42.051 02:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:42.310 02:33:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:42.877 02:33:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:44.256 02:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:44.256 02:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:44.256 02:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:44.256 02:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.256 02:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.256 ************************************ 00:11:44.256 START TEST filesystem_in_capsule_ext4 00:11:44.256 ************************************ 00:11:44.256 02:33:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:44.256 02:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:44.256 02:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:44.256 02:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:44.256 02:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:44.256 02:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:44.256 02:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:44.256 02:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:44.256 02:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:44.256 02:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:44.256 02:33:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:44.256 mke2fs 1.47.0 (5-Feb-2023) 00:11:44.256 Discarding device blocks: 
0/522240 done 00:11:44.256 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:44.256 Filesystem UUID: d17d1639-fbdc-45fb-936b-e5c4bfa6df25 00:11:44.256 Superblock backups stored on blocks: 00:11:44.256 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:44.256 00:11:44.256 Allocating group tables: 0/64 done 00:11:44.256 Writing inode tables: 0/64 done 00:11:44.256 Creating journal (8192 blocks): done 00:11:46.570 Writing superblocks and filesystem accounting information: 0/64 done 00:11:46.570 00:11:46.570 02:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:46.570 02:33:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:51.842 02:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:51.842 02:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:51.842 02:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:51.842 02:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:51.842 02:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:51.842 02:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:51.842 02:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 875067 00:11:51.842 02:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:51.842 02:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:51.842 02:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:51.842 02:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:51.842 00:11:51.842 real 0m7.918s 00:11:51.842 user 0m0.031s 00:11:51.842 sys 0m0.072s 00:11:51.842 02:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.842 02:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:51.842 ************************************ 00:11:51.842 END TEST filesystem_in_capsule_ext4 00:11:51.842 ************************************ 00:11:51.842 02:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:51.842 02:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:51.842 02:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.842 02:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.102 ************************************ 00:11:52.102 START 
TEST filesystem_in_capsule_btrfs 00:11:52.102 ************************************ 00:11:52.102 02:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:52.102 02:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:52.102 02:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:52.102 02:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:52.102 02:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:52.102 02:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:52.102 02:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:52.102 02:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:52.102 02:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:52.102 02:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:52.102 02:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:52.102 btrfs-progs v6.8.1 00:11:52.102 See https://btrfs.readthedocs.io for more information. 00:11:52.102 00:11:52.102 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:52.102 NOTE: several default settings have changed in version 5.15, please make sure 00:11:52.102 this does not affect your deployments: 00:11:52.102 - DUP for metadata (-m dup) 00:11:52.102 - enabled no-holes (-O no-holes) 00:11:52.102 - enabled free-space-tree (-R free-space-tree) 00:11:52.102 00:11:52.102 Label: (null) 00:11:52.102 UUID: a9f0dade-f8b9-4ec5-be29-45ff4c26b73e 00:11:52.102 Node size: 16384 00:11:52.102 Sector size: 4096 (CPU page size: 4096) 00:11:52.102 Filesystem size: 510.00MiB 00:11:52.102 Block group profiles: 00:11:52.102 Data: single 8.00MiB 00:11:52.102 Metadata: DUP 32.00MiB 00:11:52.102 System: DUP 8.00MiB 00:11:52.102 SSD detected: yes 00:11:52.102 Zoned device: no 00:11:52.102 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:52.102 Checksum: crc32c 00:11:52.102 Number of devices: 1 00:11:52.102 Devices: 00:11:52.102 ID SIZE PATH 00:11:52.102 1 510.00MiB /dev/nvme0n1p1 00:11:52.102 00:11:52.102 02:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:52.102 02:33:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:53.039 02:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:53.039 02:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:53.039 02:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:53.039 02:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:53.039 02:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:53.039 02:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:53.039 02:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 875067 00:11:53.039 02:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:53.039 02:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:53.039 02:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:53.039 02:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:53.039 00:11:53.039 real 0m1.038s 00:11:53.039 user 0m0.023s 00:11:53.039 sys 0m0.120s 00:11:53.039 02:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.039 02:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:53.039 ************************************ 00:11:53.039 END TEST filesystem_in_capsule_btrfs 00:11:53.039 ************************************ 00:11:53.039 02:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:53.039 02:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:53.039 02:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.039 02:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.039 ************************************ 00:11:53.039 START TEST filesystem_in_capsule_xfs 00:11:53.039 ************************************ 00:11:53.039 02:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:53.039 02:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:53.039 02:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:53.039 02:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:53.039 02:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:53.039 02:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:53.039 02:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:53.039 
02:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:53.039 02:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:53.039 02:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:53.039 02:33:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:53.298 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:53.298 = sectsz=512 attr=2, projid32bit=1 00:11:53.298 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:53.298 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:53.298 data = bsize=4096 blocks=130560, imaxpct=25 00:11:53.298 = sunit=0 swidth=0 blks 00:11:53.298 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:53.298 log =internal log bsize=4096 blocks=16384, version=2 00:11:53.298 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:53.298 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:54.235 Discarding blocks...Done. 
00:11:54.235 02:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:54.235 02:33:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:56.140 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:56.140 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:56.140 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:56.140 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:56.141 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:56.141 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:56.141 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 875067 00:11:56.141 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:56.141 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:56.141 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:56.141 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:56.141 00:11:56.141 real 0m2.964s 00:11:56.141 user 0m0.025s 00:11:56.141 sys 0m0.075s 00:11:56.141 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.141 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:56.141 ************************************ 00:11:56.141 END TEST filesystem_in_capsule_xfs 00:11:56.141 ************************************ 00:11:56.141 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:56.141 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:56.141 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:56.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.141 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:56.141 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:56.141 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:56.141 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.400 02:33:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:56.400 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:56.400 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:56.400 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:56.400 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.400 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.400 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.400 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:56.400 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 875067 00:11:56.400 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 875067 ']' 00:11:56.400 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 875067 00:11:56.400 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:56.400 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:56.400 02:33:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 875067 00:11:56.400 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:56.400 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:56.400 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 875067' 00:11:56.400 killing process with pid 875067 00:11:56.400 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 875067 00:11:56.400 02:33:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 875067 00:11:56.660 02:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:56.660 00:11:56.660 real 0m18.519s 00:11:56.660 user 1m12.973s 00:11:56.660 sys 0m1.457s 00:11:56.660 02:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:56.660 02:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.660 ************************************ 00:11:56.660 END TEST nvmf_filesystem_in_capsule 00:11:56.660 ************************************ 00:11:56.660 02:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:56.660 02:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:56.660 02:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:56.660 02:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:56.660 02:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:56.660 02:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:56.660 02:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:56.660 rmmod nvme_tcp 00:11:56.660 rmmod nvme_fabrics 00:11:56.660 rmmod nvme_keyring 00:11:56.660 02:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:56.660 02:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:56.660 02:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:56.660 02:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:56.660 02:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:56.660 02:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:56.660 02:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:56.660 02:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:56.660 02:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:56.660 02:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:56.660 02:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:56.660 02:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:56.660 02:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:56.660 02:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.660 02:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.660 02:33:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:59.198 00:11:59.198 real 0m44.438s 00:11:59.198 user 2m22.983s 00:11:59.198 sys 0m7.485s 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:59.198 ************************************ 00:11:59.198 END TEST nvmf_filesystem 00:11:59.198 ************************************ 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:59.198 ************************************ 00:11:59.198 START TEST nvmf_target_discovery 00:11:59.198 ************************************ 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:59.198 * Looking for test storage... 
00:11:59.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:59.198 
02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:59.198 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:59.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.199 --rc genhtml_branch_coverage=1 00:11:59.199 --rc genhtml_function_coverage=1 00:11:59.199 --rc genhtml_legend=1 00:11:59.199 --rc geninfo_all_blocks=1 00:11:59.199 --rc geninfo_unexecuted_blocks=1 00:11:59.199 00:11:59.199 ' 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:59.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.199 --rc genhtml_branch_coverage=1 00:11:59.199 --rc genhtml_function_coverage=1 00:11:59.199 --rc genhtml_legend=1 00:11:59.199 --rc geninfo_all_blocks=1 00:11:59.199 --rc geninfo_unexecuted_blocks=1 00:11:59.199 00:11:59.199 ' 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:59.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.199 --rc genhtml_branch_coverage=1 00:11:59.199 --rc genhtml_function_coverage=1 00:11:59.199 --rc genhtml_legend=1 00:11:59.199 --rc geninfo_all_blocks=1 00:11:59.199 --rc geninfo_unexecuted_blocks=1 00:11:59.199 00:11:59.199 ' 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:59.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:59.199 --rc genhtml_branch_coverage=1 00:11:59.199 --rc genhtml_function_coverage=1 00:11:59.199 --rc genhtml_legend=1 00:11:59.199 --rc geninfo_all_blocks=1 00:11:59.199 --rc geninfo_unexecuted_blocks=1 00:11:59.199 00:11:59.199 ' 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:59.199 02:33:29 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:59.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:59.199 02:33:29 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.774 02:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:05.774 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:05.774 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:05.774 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:05.774 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:05.774 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:05.774 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:05.774 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:05.774 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:05.774 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:05.774 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:05.774 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:05.775 02:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:05.775 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:05.775 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:05.775 02:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:05.775 Found net devices under 0000:af:00.0: cvl_0_0 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:05.775 02:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:05.775 Found net devices under 0000:af:00.1: cvl_0_1 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:05.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:05.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:12:05.775 00:12:05.775 --- 10.0.0.2 ping statistics --- 00:12:05.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.775 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:05.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:05.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:12:05.775 00:12:05.775 --- 10.0.0.1 ping statistics --- 00:12:05.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.775 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:05.775 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:05.776 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:05.776 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.776 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=881871 00:12:05.776 02:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 881871 00:12:05.776 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:05.776 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 881871 ']' 00:12:05.776 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.776 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:05.776 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.776 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:05.776 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.776 [2024-12-16 02:33:35.753658] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:12:05.776 [2024-12-16 02:33:35.753707] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.776 [2024-12-16 02:33:35.833548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:05.776 [2024-12-16 02:33:35.857417] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:05.776 [2024-12-16 02:33:35.857456] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.776 [2024-12-16 02:33:35.857463] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.776 [2024-12-16 02:33:35.857469] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.776 [2024-12-16 02:33:35.857474] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:05.776 [2024-12-16 02:33:35.858930] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.776 [2024-12-16 02:33:35.859036] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.776 [2024-12-16 02:33:35.859145] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.776 [2024-12-16 02:33:35.859146] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:05.776 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:05.776 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:05.776 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:05.776 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:05.776 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.776 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:05.776 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:05.776 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.776 02:33:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.776 [2024-12-16 02:33:35.995936] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.776 Null1 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.776 
02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.776 [2024-12-16 02:33:36.048997] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.776 Null2 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.776 
02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.776 Null3 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.776 Null4 00:12:05.776 
02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.776 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:05.777 00:12:05.777 Discovery Log Number of Records 6, Generation counter 6 00:12:05.777 =====Discovery Log Entry 0====== 00:12:05.777 trtype: tcp 00:12:05.777 adrfam: ipv4 00:12:05.777 subtype: current discovery subsystem 00:12:05.777 treq: not required 00:12:05.777 portid: 0 00:12:05.777 trsvcid: 4420 00:12:05.777 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:05.777 traddr: 10.0.0.2 00:12:05.777 eflags: explicit discovery connections, duplicate discovery information 00:12:05.777 sectype: none 00:12:05.777 =====Discovery Log Entry 1====== 00:12:05.777 trtype: tcp 00:12:05.777 adrfam: ipv4 00:12:05.777 subtype: nvme subsystem 00:12:05.777 treq: not required 00:12:05.777 portid: 0 00:12:05.777 trsvcid: 4420 00:12:05.777 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:05.777 traddr: 10.0.0.2 00:12:05.777 eflags: none 00:12:05.777 sectype: none 00:12:05.777 =====Discovery Log Entry 2====== 00:12:05.777 
trtype: tcp 00:12:05.777 adrfam: ipv4 00:12:05.777 subtype: nvme subsystem 00:12:05.777 treq: not required 00:12:05.777 portid: 0 00:12:05.777 trsvcid: 4420 00:12:05.777 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:05.777 traddr: 10.0.0.2 00:12:05.777 eflags: none 00:12:05.777 sectype: none 00:12:05.777 =====Discovery Log Entry 3====== 00:12:05.777 trtype: tcp 00:12:05.777 adrfam: ipv4 00:12:05.777 subtype: nvme subsystem 00:12:05.777 treq: not required 00:12:05.777 portid: 0 00:12:05.777 trsvcid: 4420 00:12:05.777 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:05.777 traddr: 10.0.0.2 00:12:05.777 eflags: none 00:12:05.777 sectype: none 00:12:05.777 =====Discovery Log Entry 4====== 00:12:05.777 trtype: tcp 00:12:05.777 adrfam: ipv4 00:12:05.777 subtype: nvme subsystem 00:12:05.777 treq: not required 00:12:05.777 portid: 0 00:12:05.777 trsvcid: 4420 00:12:05.777 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:05.777 traddr: 10.0.0.2 00:12:05.777 eflags: none 00:12:05.777 sectype: none 00:12:05.777 =====Discovery Log Entry 5====== 00:12:05.777 trtype: tcp 00:12:05.777 adrfam: ipv4 00:12:05.777 subtype: discovery subsystem referral 00:12:05.777 treq: not required 00:12:05.777 portid: 0 00:12:05.777 trsvcid: 4430 00:12:05.777 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:05.777 traddr: 10.0.0.2 00:12:05.777 eflags: none 00:12:05.777 sectype: none 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:05.777 Perform nvmf subsystem discovery via RPC 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.777 [ 00:12:05.777 { 00:12:05.777 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:12:05.777 "subtype": "Discovery", 00:12:05.777 "listen_addresses": [ 00:12:05.777 { 00:12:05.777 "trtype": "TCP", 00:12:05.777 "adrfam": "IPv4", 00:12:05.777 "traddr": "10.0.0.2", 00:12:05.777 "trsvcid": "4420" 00:12:05.777 } 00:12:05.777 ], 00:12:05.777 "allow_any_host": true, 00:12:05.777 "hosts": [] 00:12:05.777 }, 00:12:05.777 { 00:12:05.777 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:05.777 "subtype": "NVMe", 00:12:05.777 "listen_addresses": [ 00:12:05.777 { 00:12:05.777 "trtype": "TCP", 00:12:05.777 "adrfam": "IPv4", 00:12:05.777 "traddr": "10.0.0.2", 00:12:05.777 "trsvcid": "4420" 00:12:05.777 } 00:12:05.777 ], 00:12:05.777 "allow_any_host": true, 00:12:05.777 "hosts": [], 00:12:05.777 "serial_number": "SPDK00000000000001", 00:12:05.777 "model_number": "SPDK bdev Controller", 00:12:05.777 "max_namespaces": 32, 00:12:05.777 "min_cntlid": 1, 00:12:05.777 "max_cntlid": 65519, 00:12:05.777 "namespaces": [ 00:12:05.777 { 00:12:05.777 "nsid": 1, 00:12:05.777 "bdev_name": "Null1", 00:12:05.777 "name": "Null1", 00:12:05.777 "nguid": "E0DD1402727040FD9131544819B063FB", 00:12:05.777 "uuid": "e0dd1402-7270-40fd-9131-544819b063fb" 00:12:05.777 } 00:12:05.777 ] 00:12:05.777 }, 00:12:05.777 { 00:12:05.777 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:05.777 "subtype": "NVMe", 00:12:05.777 "listen_addresses": [ 00:12:05.777 { 00:12:05.777 "trtype": "TCP", 00:12:05.777 "adrfam": "IPv4", 00:12:05.777 "traddr": "10.0.0.2", 00:12:05.777 "trsvcid": "4420" 00:12:05.777 } 00:12:05.777 ], 00:12:05.777 "allow_any_host": true, 00:12:05.777 "hosts": [], 00:12:05.777 "serial_number": "SPDK00000000000002", 00:12:05.777 "model_number": "SPDK bdev Controller", 00:12:05.777 "max_namespaces": 32, 00:12:05.777 "min_cntlid": 1, 00:12:05.777 "max_cntlid": 65519, 00:12:05.777 "namespaces": [ 00:12:05.777 { 00:12:05.777 "nsid": 1, 00:12:05.777 "bdev_name": "Null2", 00:12:05.777 "name": "Null2", 00:12:05.777 "nguid": "7204B34CDD5D4E2CA451F305385B59D7", 
00:12:05.777 "uuid": "7204b34c-dd5d-4e2c-a451-f305385b59d7" 00:12:05.777 } 00:12:05.777 ] 00:12:05.777 }, 00:12:05.777 { 00:12:05.777 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:05.777 "subtype": "NVMe", 00:12:05.777 "listen_addresses": [ 00:12:05.777 { 00:12:05.777 "trtype": "TCP", 00:12:05.777 "adrfam": "IPv4", 00:12:05.777 "traddr": "10.0.0.2", 00:12:05.777 "trsvcid": "4420" 00:12:05.777 } 00:12:05.777 ], 00:12:05.777 "allow_any_host": true, 00:12:05.777 "hosts": [], 00:12:05.777 "serial_number": "SPDK00000000000003", 00:12:05.777 "model_number": "SPDK bdev Controller", 00:12:05.777 "max_namespaces": 32, 00:12:05.777 "min_cntlid": 1, 00:12:05.777 "max_cntlid": 65519, 00:12:05.777 "namespaces": [ 00:12:05.777 { 00:12:05.777 "nsid": 1, 00:12:05.777 "bdev_name": "Null3", 00:12:05.777 "name": "Null3", 00:12:05.777 "nguid": "1E5D8C4D86A74A868927152AD4FFC323", 00:12:05.777 "uuid": "1e5d8c4d-86a7-4a86-8927-152ad4ffc323" 00:12:05.777 } 00:12:05.777 ] 00:12:05.777 }, 00:12:05.777 { 00:12:05.777 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:05.777 "subtype": "NVMe", 00:12:05.777 "listen_addresses": [ 00:12:05.777 { 00:12:05.777 "trtype": "TCP", 00:12:05.777 "adrfam": "IPv4", 00:12:05.777 "traddr": "10.0.0.2", 00:12:05.777 "trsvcid": "4420" 00:12:05.777 } 00:12:05.777 ], 00:12:05.777 "allow_any_host": true, 00:12:05.777 "hosts": [], 00:12:05.777 "serial_number": "SPDK00000000000004", 00:12:05.777 "model_number": "SPDK bdev Controller", 00:12:05.777 "max_namespaces": 32, 00:12:05.777 "min_cntlid": 1, 00:12:05.777 "max_cntlid": 65519, 00:12:05.777 "namespaces": [ 00:12:05.777 { 00:12:05.777 "nsid": 1, 00:12:05.777 "bdev_name": "Null4", 00:12:05.777 "name": "Null4", 00:12:05.777 "nguid": "71294CFAA47A490AA34912B51B6A3A65", 00:12:05.777 "uuid": "71294cfa-a47a-490a-a349-12b51b6a3a65" 00:12:05.777 } 00:12:05.777 ] 00:12:05.777 } 00:12:05.777 ] 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.777 
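The `nvmf_get_subsystems` dump above returns one JSON object per subsystem (the discovery subsystem plus cnode1 through cnode4). When eyeballing such a dump in a log, a quick way to pull out just the NQNs is a line-based extraction like the sketch below; the helper name `list_nqns` is hypothetical, and this is a naive grep/sed filter over the pretty-printed form shown here, not a real JSON parser.

```shell
# Hypothetical helper: extract subsystem NQNs from a pretty-printed
# nvmf_get_subsystems dump like the one above. Reads JSON text on stdin.
# Naive line-based matching; assumes the '"nqn": "..."' layout seen in the log.
list_nqns() {
    grep -o '"nqn": "[^"]*"' | sed 's/"nqn": "\(.*\)"/\1/'
}
```

Piping the dump above through it would yield nqn.2014-08.org.nvmexpress.discovery followed by the four cnode NQNs, matching the six discovery-log records reported earlier.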
02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.777 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:12:05.778 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.778 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.778 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.778 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:05.778 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:05.778 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.778 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.778 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.778 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:05.778 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.778 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.778 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.778 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:05.778 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:05.778 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.778 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:05.778 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.778 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:05.778 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.778 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.778 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.778 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:05.778 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.778 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.778 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.778 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:05.778 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:05.778 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.778 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:05.778 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.037 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:06.037 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:12:06.037 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:06.037 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:06.037 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:06.037 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:06.037 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:06.037 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:06.037 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:06.037 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:06.037 rmmod nvme_tcp 00:12:06.037 rmmod nvme_fabrics 00:12:06.037 rmmod nvme_keyring 00:12:06.037 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:06.037 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:06.037 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:06.037 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 881871 ']' 00:12:06.037 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 881871 00:12:06.037 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 881871 ']' 00:12:06.037 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 881871 00:12:06.037 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:06.037 
02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:06.037 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 881871 00:12:06.037 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:06.037 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:06.037 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 881871' 00:12:06.037 killing process with pid 881871 00:12:06.037 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 881871 00:12:06.037 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 881871 00:12:06.297 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:06.297 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:06.297 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:06.297 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:06.297 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:06.297 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:06.297 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:06.297 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:06.297 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:12:06.297 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.297 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:06.297 02:33:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.203 02:33:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:08.203 00:12:08.203 real 0m9.366s 00:12:08.203 user 0m5.446s 00:12:08.203 sys 0m4.852s 00:12:08.203 02:33:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.203 02:33:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.203 ************************************ 00:12:08.203 END TEST nvmf_target_discovery 00:12:08.203 ************************************ 00:12:08.203 02:33:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:08.203 02:33:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:08.203 02:33:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.203 02:33:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:08.463 ************************************ 00:12:08.463 START TEST nvmf_referrals 00:12:08.463 ************************************ 00:12:08.463 02:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:08.463 * Looking for test storage... 
00:12:08.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:08.463 02:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:08.463 02:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:12:08.463 02:33:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:08.463 02:33:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:08.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.463 
--rc genhtml_branch_coverage=1 00:12:08.463 --rc genhtml_function_coverage=1 00:12:08.463 --rc genhtml_legend=1 00:12:08.463 --rc geninfo_all_blocks=1 00:12:08.463 --rc geninfo_unexecuted_blocks=1 00:12:08.463 00:12:08.463 ' 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:08.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.463 --rc genhtml_branch_coverage=1 00:12:08.463 --rc genhtml_function_coverage=1 00:12:08.463 --rc genhtml_legend=1 00:12:08.463 --rc geninfo_all_blocks=1 00:12:08.463 --rc geninfo_unexecuted_blocks=1 00:12:08.463 00:12:08.463 ' 00:12:08.463 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:08.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.463 --rc genhtml_branch_coverage=1 00:12:08.463 --rc genhtml_function_coverage=1 00:12:08.463 --rc genhtml_legend=1 00:12:08.463 --rc geninfo_all_blocks=1 00:12:08.464 --rc geninfo_unexecuted_blocks=1 00:12:08.464 00:12:08.464 ' 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:08.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:08.464 --rc genhtml_branch_coverage=1 00:12:08.464 --rc genhtml_function_coverage=1 00:12:08.464 --rc genhtml_legend=1 00:12:08.464 --rc geninfo_all_blocks=1 00:12:08.464 --rc geninfo_unexecuted_blocks=1 00:12:08.464 00:12:08.464 ' 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:08.464 
02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:08.464 02:33:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:08.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:08.464 02:33:39 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:08.464 02:33:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:15.036 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:15.036 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.036 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:15.037 Found net devices under 0000:af:00.0: cvl_0_0 00:12:15.037 02:33:44 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:15.037 Found net devices under 0000:af:00.1: cvl_0_1 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:15.037 02:33:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:15.037 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:15.037 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:12:15.037 00:12:15.037 --- 10.0.0.2 ping statistics --- 00:12:15.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.037 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:15.037 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:15.037 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:12:15.037 00:12:15.037 --- 10.0.0.1 ping statistics --- 00:12:15.037 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:15.037 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=885419 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 885419 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 885419 ']' 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.037 [2024-12-16 02:33:45.113082] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:12:15.037 [2024-12-16 02:33:45.113135] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.037 [2024-12-16 02:33:45.194104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.037 [2024-12-16 02:33:45.217055] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.037 [2024-12-16 02:33:45.217092] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:15.037 [2024-12-16 02:33:45.217099] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.037 [2024-12-16 02:33:45.217106] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.037 [2024-12-16 02:33:45.217114] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:15.037 [2024-12-16 02:33:45.218564] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.037 [2024-12-16 02:33:45.218674] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.037 [2024-12-16 02:33:45.218778] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.037 [2024-12-16 02:33:45.218780] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.037 [2024-12-16 02:33:45.346327] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.037 [2024-12-16 02:33:45.370992] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.037 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:15.038 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.038 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.038 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.038 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:15.038 02:33:45 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.038 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.038 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.038 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:15.038 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:15.038 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.038 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.038 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.038 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:15.038 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:15.038 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:15.038 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:15.038 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:15.038 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.038 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:15.038 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.038 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.038 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:15.038 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:15.038 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:15.038 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:15.038 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:15.038 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:15.038 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:15.038 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:15.296 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:15.296 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:15.297 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:15.297 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.297 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.297 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.297 02:33:45 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:15.297 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.297 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.297 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.297 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:15.297 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.297 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.297 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.297 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:15.297 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:15.297 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.297 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.297 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.297 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:15.297 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:15.297 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:15.297 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:15.297 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:15.297 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:15.297 02:33:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:15.556 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:15.556 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:15.556 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:15.556 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.556 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.556 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.556 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:15.556 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.556 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.556 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.556 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:15.556 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:15.556 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:15.556 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:15.556 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.556 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:15.556 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:15.556 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.556 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:15.556 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:15.556 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:15.556 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:15.556 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:15.556 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:15.556 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:15.556 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:15.814 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:15.814 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:15.815 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:15.815 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:15.815 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:15.815 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:15.815 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:15.815 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:15.815 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:15.815 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:15.815 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:15.815 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:15.815 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:16.073 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:16.073 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:16.073 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.073 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.073 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.073 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:16.073 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:16.073 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:16.073 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:16.073 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.073 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:16.073 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.074 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.074 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:16.074 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:16.074 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:16.074 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:16.074 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:16.074 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:16.074 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:16.074 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:16.333 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:16.333 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:16.333 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:16.333 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:16.333 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:16.333 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:16.333 02:33:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:16.592 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:16.592 02:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:16.592 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:16.592 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:16.592 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:16.592 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:16.592 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:16.592 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:16.592 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.592 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:16.592 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.592 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:16.592 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:16.592 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.592 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:16.592 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.592 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:16.592 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:16.592 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:16.592 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:16.592 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:16.592 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:16.592 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:16.852 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:16.852 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:16.852 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:16.852 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:16.852 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:16.852 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:16.852 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:16.852 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:12:16.852 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:16.852 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:16.852 rmmod nvme_tcp 00:12:16.852 rmmod nvme_fabrics 00:12:16.852 rmmod nvme_keyring 00:12:16.852 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:16.852 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:16.852 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:16.852 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 885419 ']' 00:12:16.852 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 885419 00:12:16.852 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 885419 ']' 00:12:16.852 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 885419 00:12:16.852 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:16.852 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:16.852 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 885419 00:12:17.115 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:17.115 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:17.115 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 885419' 00:12:17.115 killing process with pid 885419 00:12:17.115 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- 
# kill 885419 00:12:17.115 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 885419 00:12:17.115 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:17.115 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:17.115 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:17.115 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:17.115 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:17.115 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:17.115 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:17.115 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:17.115 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:17.115 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.115 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.115 02:33:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:19.651 00:12:19.651 real 0m10.874s 00:12:19.651 user 0m12.395s 00:12:19.651 sys 0m5.191s 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:19.651 ************************************ 
00:12:19.651 END TEST nvmf_referrals 00:12:19.651 ************************************ 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:19.651 ************************************ 00:12:19.651 START TEST nvmf_connect_disconnect 00:12:19.651 ************************************ 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:19.651 * Looking for test storage... 
00:12:19.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:19.651 02:33:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:19.651 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:19.651 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:19.651 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:19.651 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:19.651 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:19.651 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:19.651 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:19.651 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:19.651 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:19.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.652 --rc genhtml_branch_coverage=1 00:12:19.652 --rc genhtml_function_coverage=1 00:12:19.652 --rc genhtml_legend=1 00:12:19.652 --rc geninfo_all_blocks=1 00:12:19.652 --rc geninfo_unexecuted_blocks=1 00:12:19.652 00:12:19.652 ' 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:19.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.652 --rc genhtml_branch_coverage=1 00:12:19.652 --rc genhtml_function_coverage=1 00:12:19.652 --rc genhtml_legend=1 00:12:19.652 --rc geninfo_all_blocks=1 00:12:19.652 --rc geninfo_unexecuted_blocks=1 00:12:19.652 00:12:19.652 ' 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:19.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.652 --rc genhtml_branch_coverage=1 00:12:19.652 --rc genhtml_function_coverage=1 00:12:19.652 --rc genhtml_legend=1 00:12:19.652 --rc geninfo_all_blocks=1 00:12:19.652 --rc geninfo_unexecuted_blocks=1 00:12:19.652 00:12:19.652 ' 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:19.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.652 --rc genhtml_branch_coverage=1 00:12:19.652 --rc genhtml_function_coverage=1 00:12:19.652 --rc genhtml_legend=1 00:12:19.652 --rc geninfo_all_blocks=1 00:12:19.652 --rc geninfo_unexecuted_blocks=1 00:12:19.652 00:12:19.652 ' 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:19.652 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:19.652 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:19.653 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:19.653 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.653 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:19.653 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:19.653 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:19.653 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.653 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.653 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.653 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:19.653 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:19.653 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:19.653 02:33:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:26.224 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:26.224 02:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:26.224 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:26.224 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:26.224 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:26.224 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:26.224 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:26.224 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:26.224 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:26.224 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:26.224 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:26.224 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:26.224 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:26.224 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:26.224 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:26.224 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.224 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.224 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.224 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.224 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.224 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.224 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:26.225 02:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:26.225 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:26.225 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:26.225 02:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:26.225 Found net devices under 0000:af:00.0: cvl_0_0 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:26.225 02:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:26.225 Found net devices under 0000:af:00.1: cvl_0_1 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:26.225 02:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:26.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:26.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:12:26.225 00:12:26.225 --- 10.0.0.2 ping statistics --- 00:12:26.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.225 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:26.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:26.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:12:26.225 00:12:26.225 --- 10.0.0.1 ping statistics --- 00:12:26.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.225 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:26.225 02:33:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:26.225 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:26.225 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:26.225 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:26.225 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:26.225 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=889383 00:12:26.225 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 889383 00:12:26.225 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:26.225 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 889383 ']' 00:12:26.225 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.225 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:26.225 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.226 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:26.226 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:26.226 [2024-12-16 02:33:56.080987] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:12:26.226 [2024-12-16 02:33:56.081031] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.226 [2024-12-16 02:33:56.155769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:26.226 [2024-12-16 02:33:56.178586] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:26.226 [2024-12-16 02:33:56.178623] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.226 [2024-12-16 02:33:56.178631] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.226 [2024-12-16 02:33:56.178638] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.226 [2024-12-16 02:33:56.178643] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:26.226 [2024-12-16 02:33:56.180115] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.226 [2024-12-16 02:33:56.180224] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.226 [2024-12-16 02:33:56.180331] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.226 [2024-12-16 02:33:56.180332] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:26.226 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:26.226 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:26.226 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:26.226 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:26.226 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:26.226 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.226 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:26.226 02:33:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.226 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:26.226 [2024-12-16 02:33:56.324315] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:26.226 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.226 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:26.226 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.226 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:26.226 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.226 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:26.226 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:26.226 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.226 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:26.226 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.226 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:26.226 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.226 02:33:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:26.226 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.226 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.226 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.226 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:26.226 [2024-12-16 02:33:56.384557] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.226 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.226 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:26.226 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:26.226 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:26.226 02:33:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:28.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.103 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.202 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.204 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.737 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.642 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.711 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.242 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.678 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:05.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.655 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.065 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.069 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.081 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.615 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.154 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.616 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.522 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.653 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:07.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:14.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.599 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.036 [2024-12-16 02:35:54.296622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b46810 is same with the state(6) to be set 00:14:24.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.949 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.486 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.022 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:35.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.981 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:42.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.127 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.034 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.015 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.041 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:17.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.018 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:21.922 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:24.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.998 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:28.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:31.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:33.972 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:35.881 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:40.441 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:42.976 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.881 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.416 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:49.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:54.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:56.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:58.841 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:05.814 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.259 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:15.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.231 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:17.231 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:17.231 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:17.231 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:17.231 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:17.231 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:17.231 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:17.231 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:17.231 rmmod nvme_tcp 00:16:17.231 
rmmod nvme_fabrics 00:16:17.231 rmmod nvme_keyring 00:16:17.231 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:17.231 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:17.231 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:17.231 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 889383 ']' 00:16:17.231 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 889383 00:16:17.231 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 889383 ']' 00:16:17.231 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 889383 00:16:17.231 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:16:17.231 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:17.231 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 889383 00:16:17.490 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:17.490 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:17.490 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 889383' 00:16:17.490 killing process with pid 889383 00:16:17.490 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 889383 00:16:17.490 02:37:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 889383 
00:16:17.490 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:17.490 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:17.490 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:17.490 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:17.490 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:17.490 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:17.490 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:17.490 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:17.490 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:17.490 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.490 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:17.491 02:37:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.026 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:20.026 00:16:20.026 real 4m0.342s 00:16:20.026 user 15m18.242s 00:16:20.026 sys 0m24.754s 00:16:20.026 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:20.026 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:20.026 
************************************ 00:16:20.026 END TEST nvmf_connect_disconnect 00:16:20.026 ************************************ 00:16:20.026 02:37:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:20.026 02:37:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:20.026 02:37:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:20.026 02:37:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:20.026 ************************************ 00:16:20.026 START TEST nvmf_multitarget 00:16:20.026 ************************************ 00:16:20.026 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:20.026 * Looking for test storage... 
00:16:20.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:20.026 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:20.026 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:16:20.026 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:20.027 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.027 --rc genhtml_branch_coverage=1 00:16:20.027 --rc genhtml_function_coverage=1 00:16:20.027 --rc genhtml_legend=1 00:16:20.027 --rc geninfo_all_blocks=1 00:16:20.027 --rc geninfo_unexecuted_blocks=1 00:16:20.027 00:16:20.027 ' 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:20.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.027 --rc genhtml_branch_coverage=1 00:16:20.027 --rc genhtml_function_coverage=1 00:16:20.027 --rc genhtml_legend=1 00:16:20.027 --rc geninfo_all_blocks=1 00:16:20.027 --rc geninfo_unexecuted_blocks=1 00:16:20.027 00:16:20.027 ' 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:20.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.027 --rc genhtml_branch_coverage=1 00:16:20.027 --rc genhtml_function_coverage=1 00:16:20.027 --rc genhtml_legend=1 00:16:20.027 --rc geninfo_all_blocks=1 00:16:20.027 --rc geninfo_unexecuted_blocks=1 00:16:20.027 00:16:20.027 ' 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:20.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.027 --rc genhtml_branch_coverage=1 00:16:20.027 --rc genhtml_function_coverage=1 00:16:20.027 --rc genhtml_legend=1 00:16:20.027 --rc geninfo_all_blocks=1 00:16:20.027 --rc geninfo_unexecuted_blocks=1 00:16:20.027 00:16:20.027 ' 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:20.027 02:37:50 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:20.027 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:20.027 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:20.028 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.028 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:20.028 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.028 02:37:50 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:20.028 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:20.028 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:20.028 02:37:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:26.599 02:37:56 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:26.599 02:37:56 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:26.599 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:26.599 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:26.599 02:37:56 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:26.599 Found net devices under 0000:af:00.0: cvl_0_0 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:26.599 
02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:26.599 Found net devices under 0000:af:00.1: cvl_0_1 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:26.599 02:37:56 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:26.599 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:26.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:26.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:16:26.600 00:16:26.600 --- 10.0.0.2 ping statistics --- 00:16:26.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.600 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:26.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:26.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:16:26.600 00:16:26.600 --- 10.0.0.1 ping statistics --- 00:16:26.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:26.600 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=932366 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 932366 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 932366 ']' 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:26.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:26.600 [2024-12-16 02:37:56.461653] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:16:26.600 [2024-12-16 02:37:56.461704] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:26.600 [2024-12-16 02:37:56.542788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:26.600 [2024-12-16 02:37:56.566420] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:26.600 [2024-12-16 02:37:56.566459] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:26.600 [2024-12-16 02:37:56.566466] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:26.600 [2024-12-16 02:37:56.566472] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:26.600 [2024-12-16 02:37:56.566476] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:26.600 [2024-12-16 02:37:56.568013] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.600 [2024-12-16 02:37:56.568120] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:26.600 [2024-12-16 02:37:56.568135] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:26.600 [2024-12-16 02:37:56.568137] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:26.600 02:37:56 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:26.600 "nvmf_tgt_1" 00:16:26.600 02:37:56 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:26.600 "nvmf_tgt_2" 00:16:26.600 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:26.600 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:26.600 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:26.600 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:26.600 true 00:16:26.600 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:26.859 true 00:16:26.859 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:26.859 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:26.859 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:26.859 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:26.859 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:26.859 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:26.859 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:26.859 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:26.859 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:26.859 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:26.859 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:26.859 rmmod nvme_tcp 00:16:26.859 rmmod nvme_fabrics 00:16:26.859 rmmod nvme_keyring 00:16:26.859 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:26.859 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:27.118 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:27.118 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 932366 ']' 00:16:27.118 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 932366 00:16:27.118 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 932366 ']' 00:16:27.118 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 932366 00:16:27.118 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:16:27.118 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.118 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 932366 00:16:27.118 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:27.118 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:27.118 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 932366' 00:16:27.118 killing process with pid 932366 00:16:27.118 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 932366 00:16:27.118 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 932366 00:16:27.118 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:27.118 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:27.118 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:27.118 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:27.118 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:16:27.118 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:16:27.118 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:27.118 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:27.118 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:27.118 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.118 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:27.118 02:37:57 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.654 02:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:29.654 00:16:29.654 real 0m9.552s 00:16:29.654 user 0m7.197s 00:16:29.654 sys 0m4.843s 00:16:29.654 02:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:29.654 02:37:59 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:29.654 ************************************ 00:16:29.654 END TEST nvmf_multitarget 00:16:29.654 ************************************ 00:16:29.654 02:37:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:29.654 02:37:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:29.654 02:37:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:29.654 02:37:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:29.654 ************************************ 00:16:29.654 START TEST nvmf_rpc 00:16:29.654 ************************************ 00:16:29.654 02:37:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:29.654 * Looking for test storage... 
00:16:29.654 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:29.654 02:37:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:29.654 02:37:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:16:29.654 02:37:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:29.654 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:29.654 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:29.654 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:29.654 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:29.654 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:29.654 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:29.654 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:29.654 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:29.654 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:29.654 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:29.654 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:29.654 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:29.654 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:29.654 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:29.654 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:29.654 02:38:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:29.654 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:29.654 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:29.654 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:29.654 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:29.654 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:29.654 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:29.654 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:29.654 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:29.654 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:29.654 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:29.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:29.655 --rc genhtml_branch_coverage=1 00:16:29.655 --rc genhtml_function_coverage=1 00:16:29.655 --rc genhtml_legend=1 00:16:29.655 --rc geninfo_all_blocks=1 00:16:29.655 --rc geninfo_unexecuted_blocks=1 
00:16:29.655 00:16:29.655 ' 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:29.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:29.655 --rc genhtml_branch_coverage=1 00:16:29.655 --rc genhtml_function_coverage=1 00:16:29.655 --rc genhtml_legend=1 00:16:29.655 --rc geninfo_all_blocks=1 00:16:29.655 --rc geninfo_unexecuted_blocks=1 00:16:29.655 00:16:29.655 ' 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:29.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:29.655 --rc genhtml_branch_coverage=1 00:16:29.655 --rc genhtml_function_coverage=1 00:16:29.655 --rc genhtml_legend=1 00:16:29.655 --rc geninfo_all_blocks=1 00:16:29.655 --rc geninfo_unexecuted_blocks=1 00:16:29.655 00:16:29.655 ' 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:29.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:29.655 --rc genhtml_branch_coverage=1 00:16:29.655 --rc genhtml_function_coverage=1 00:16:29.655 --rc genhtml_legend=1 00:16:29.655 --rc geninfo_all_blocks=1 00:16:29.655 --rc geninfo_unexecuted_blocks=1 00:16:29.655 00:16:29.655 ' 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:29.655 02:38:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:29.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:29.655 02:38:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:29.655 02:38:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.227 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:36.227 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:36.227 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:36.227 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:36.227 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:36.227 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:36.227 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:36.227 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:36.227 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:36.227 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:36.227 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:36.227 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:36.227 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:36.227 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:36.227 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:36.227 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:36.227 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:36.227 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:36.227 
02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:36.227 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:36.227 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 
(0x8086 - 0x159b)' 00:16:36.228 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:36.228 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:36.228 Found net devices under 0000:af:00.0: cvl_0_0 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:36.228 Found net devices under 0000:af:00.1: cvl_0_1 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.228 02:38:05 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:36.228 
02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:36.228 02:38:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:36.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:36.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.412 ms 00:16:36.228 00:16:36.228 --- 10.0.0.2 ping statistics --- 00:16:36.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.228 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:36.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:36.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:16:36.228 00:16:36.228 --- 10.0.0.1 ping statistics --- 00:16:36.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.228 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=936032 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 936032 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 936032 ']' 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.228 [2024-12-16 02:38:06.163409] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:16:36.228 [2024-12-16 02:38:06.163456] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:36.228 [2024-12-16 02:38:06.241788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:36.228 [2024-12-16 02:38:06.264871] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:36.228 [2024-12-16 02:38:06.264909] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:36.228 [2024-12-16 02:38:06.264916] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:36.228 [2024-12-16 02:38:06.264922] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:36.228 [2024-12-16 02:38:06.264928] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:36.228 [2024-12-16 02:38:06.266304] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:36.228 [2024-12-16 02:38:06.266414] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:36.228 [2024-12-16 02:38:06.266526] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.228 [2024-12-16 02:38:06.266527] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:36.228 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.229 02:38:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:36.229 "tick_rate": 2100000000, 00:16:36.229 "poll_groups": [ 00:16:36.229 { 00:16:36.229 "name": "nvmf_tgt_poll_group_000", 00:16:36.229 "admin_qpairs": 0, 00:16:36.229 "io_qpairs": 0, 00:16:36.229 "current_admin_qpairs": 0, 00:16:36.229 "current_io_qpairs": 0, 00:16:36.229 "pending_bdev_io": 0, 00:16:36.229 "completed_nvme_io": 0, 00:16:36.229 "transports": [] 00:16:36.229 }, 00:16:36.229 { 00:16:36.229 "name": "nvmf_tgt_poll_group_001", 00:16:36.229 "admin_qpairs": 0, 00:16:36.229 "io_qpairs": 0, 00:16:36.229 "current_admin_qpairs": 0, 00:16:36.229 "current_io_qpairs": 0, 00:16:36.229 "pending_bdev_io": 0, 00:16:36.229 "completed_nvme_io": 0, 00:16:36.229 "transports": [] 00:16:36.229 }, 00:16:36.229 { 00:16:36.229 "name": "nvmf_tgt_poll_group_002", 00:16:36.229 "admin_qpairs": 0, 00:16:36.229 "io_qpairs": 0, 00:16:36.229 "current_admin_qpairs": 0, 00:16:36.229 "current_io_qpairs": 0, 00:16:36.229 "pending_bdev_io": 0, 00:16:36.229 "completed_nvme_io": 0, 00:16:36.229 "transports": [] 00:16:36.229 }, 00:16:36.229 { 00:16:36.229 "name": "nvmf_tgt_poll_group_003", 00:16:36.229 "admin_qpairs": 0, 00:16:36.229 "io_qpairs": 0, 00:16:36.229 "current_admin_qpairs": 0, 00:16:36.229 "current_io_qpairs": 0, 00:16:36.229 "pending_bdev_io": 0, 00:16:36.229 "completed_nvme_io": 0, 00:16:36.229 "transports": [] 00:16:36.229 } 00:16:36.229 ] 00:16:36.229 }' 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:36.229 02:38:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.229 [2024-12-16 02:38:06.507753] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:36.229 "tick_rate": 2100000000, 00:16:36.229 "poll_groups": [ 00:16:36.229 { 00:16:36.229 "name": "nvmf_tgt_poll_group_000", 00:16:36.229 "admin_qpairs": 0, 00:16:36.229 "io_qpairs": 0, 00:16:36.229 "current_admin_qpairs": 0, 00:16:36.229 "current_io_qpairs": 0, 00:16:36.229 "pending_bdev_io": 0, 00:16:36.229 "completed_nvme_io": 0, 00:16:36.229 "transports": [ 00:16:36.229 { 00:16:36.229 "trtype": "TCP" 00:16:36.229 } 00:16:36.229 ] 00:16:36.229 }, 00:16:36.229 { 00:16:36.229 "name": "nvmf_tgt_poll_group_001", 00:16:36.229 "admin_qpairs": 0, 00:16:36.229 "io_qpairs": 0, 00:16:36.229 "current_admin_qpairs": 0, 00:16:36.229 "current_io_qpairs": 0, 00:16:36.229 "pending_bdev_io": 0, 00:16:36.229 
"completed_nvme_io": 0, 00:16:36.229 "transports": [ 00:16:36.229 { 00:16:36.229 "trtype": "TCP" 00:16:36.229 } 00:16:36.229 ] 00:16:36.229 }, 00:16:36.229 { 00:16:36.229 "name": "nvmf_tgt_poll_group_002", 00:16:36.229 "admin_qpairs": 0, 00:16:36.229 "io_qpairs": 0, 00:16:36.229 "current_admin_qpairs": 0, 00:16:36.229 "current_io_qpairs": 0, 00:16:36.229 "pending_bdev_io": 0, 00:16:36.229 "completed_nvme_io": 0, 00:16:36.229 "transports": [ 00:16:36.229 { 00:16:36.229 "trtype": "TCP" 00:16:36.229 } 00:16:36.229 ] 00:16:36.229 }, 00:16:36.229 { 00:16:36.229 "name": "nvmf_tgt_poll_group_003", 00:16:36.229 "admin_qpairs": 0, 00:16:36.229 "io_qpairs": 0, 00:16:36.229 "current_admin_qpairs": 0, 00:16:36.229 "current_io_qpairs": 0, 00:16:36.229 "pending_bdev_io": 0, 00:16:36.229 "completed_nvme_io": 0, 00:16:36.229 "transports": [ 00:16:36.229 { 00:16:36.229 "trtype": "TCP" 00:16:36.229 } 00:16:36.229 ] 00:16:36.229 } 00:16:36.229 ] 00:16:36.229 }' 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:36.229 
02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.229 Malloc1 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:36.229 02:38:06 
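The `jsum` helper exercised above (target/rpc.sh@19-20) pipes a jq filter over the `nvmf_get_stats` JSON into an awk accumulator to total a field across all poll groups. A minimal self-contained sketch of the same summing pattern, with the jq stage replaced by a printf of the four zeros the log shows for `admin_qpairs`, so it runs without jq or a live target:

```shell
# Sum a numeric field across poll groups, as jsum does with
#   jq '.poll_groups[].admin_qpairs' | awk '{s+=$1}END{print s}'
# The printf stands in for jq output over the stats JSON above.
sum=$(printf '0\n0\n0\n0\n' | awk '{s+=$1} END {print s}')
echo "$sum"
```

The test then asserts `(( sum == 0 ))`, exactly as rpc.sh@35 does against the summed output.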
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.229 [2024-12-16 02:38:06.675689] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:36.229 [2024-12-16 02:38:06.704342] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:16:36.229 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:36.229 could not add new controller: failed to write to nvme-fabrics device 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:36.229 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:36.230 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:36.230 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:36.230 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:36.230 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.230 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.230 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.230 02:38:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:37.607 02:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:37.607 02:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:37.607 02:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:37.607 02:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:37.607 02:38:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:39.512 02:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:39.512 02:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:39.512 02:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:39.512 02:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:39.512 02:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:39.512 02:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
00:16:39.512 02:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:39.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:39.512 02:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:39.512 02:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:39.512 02:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:39.512 02:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:39.512 02:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:39.512 02:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:39.512 02:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:39.512 02:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:39.513 02:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.513 02:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.513 02:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.513 02:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:39.513 02:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:39.513 02:38:09 
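`waitforserial` (common/autotest_common.sh@1202-1212) polls `lsblk -l -o NAME,SERIAL | grep -c <serial>` until the expected device count appears, giving up after 15 attempts. The same bounded poll-until-ready shape, sketched against a temporary file instead of lsblk so it runs without an NVMe fabric (the file name and delays are illustrative only):

```shell
# Mirror waitforserial's bounded retry loop over grep -c output,
# using a marker file in place of lsblk's NAME,SERIAL listing.
marker=$(mktemp)
( sleep 0.2; echo "SPDKISFASTANDAWESOME" >> "$marker" ) &

i=0
while (( i++ <= 15 )); do
    count=$(grep -c SPDKISFASTANDAWESOME "$marker" || true)
    (( count == 1 )) && break
    sleep 0.1
done
wait
echo "found after $i attempts"
rm -f "$marker"
```

`waitforserial_disconnect` is the same loop inverted: it keeps polling until `grep -q -w` on the serial stops matching.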
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:39.513 02:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:39.513 02:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:39.513 02:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:39.513 02:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:39.513 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:39.513 02:38:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:39.513 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:39.513 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:39.513 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:39.513 [2024-12-16 02:38:10.026715] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:16:39.513 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:39.513 could not add new controller: failed to write to nvme-fabrics device 00:16:39.513 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:39.513 
02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:39.513 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:39.513 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:39.513 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:39.513 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.513 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.513 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.513 02:38:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:40.952 02:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:40.952 02:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:40.952 02:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:40.952 02:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:40.952 02:38:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:42.968 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:42.968 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:42.968 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:16:42.968 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:42.968 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:42.968 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:42.968 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:42.968 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:42.968 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:42.968 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:42.968 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:42.968 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:42.968 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:42.969 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:42.969 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:42.969 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:42.969 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.969 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.969 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.969 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:42.969 02:38:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:42.969 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:42.969 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.969 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.969 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.969 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:42.969 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.969 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.969 [2024-12-16 02:38:13.427747] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:42.969 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.969 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:42.969 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.969 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:42.969 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.969 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:42.969 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.969 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
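The iterations that follow repeat one create/listen/add-ns/connect/disconnect/delete cycle, driven by target/rpc.sh@81's `for i in $(seq 1 $loops)`. A skeleton of that per-iteration cycle with `rpc_cmd` stubbed out so it runs without a live SPDK target (the stub and the `$loops` value are illustrative, not the real harness):

```shell
# Skeleton of rpc.sh's subsystem lifecycle loop; rpc_cmd is a stub
# that just echoes, standing in for the real JSON-RPC client.
rpc_cmd() { echo "rpc: $*"; }

loops=5
for i in $(seq 1 "$loops"); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    # ... nvme connect / waitforserial / nvme disconnect happen here ...
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done
echo "completed $loops iterations"
```

Each pass re-registers the same NQN from scratch, which is why the log shows the identical listener notice and serial-wait sequence five times.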
common/autotest_common.sh@10 -- # set +x 00:16:42.969 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.969 02:38:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:43.904 02:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:43.904 02:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:43.905 02:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:43.905 02:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:43.905 02:38:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:46.438 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:46.438 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:46.438 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:46.438 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:46.438 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:46.438 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:46.438 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:46.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.438 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:46.438 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:46.438 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:46.438 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:46.438 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:46.438 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:46.439 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:46.439 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:46.439 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.439 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.439 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.439 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:46.439 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.439 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.439 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.439 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:46.439 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:46.439 
02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.439 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.439 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.439 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:46.439 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.439 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.439 [2024-12-16 02:38:16.678464] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.439 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.439 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:46.439 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.439 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.439 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.439 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:46.439 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.439 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.439 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.439 02:38:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:47.375 02:38:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:47.375 02:38:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:47.375 02:38:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:47.375 02:38:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:47.375 02:38:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:49.280 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:49.280 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:49.280 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:49.280 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:49.280 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:49.280 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:49.280 02:38:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:49.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.539 02:38:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.539 [2024-12-16 02:38:20.114554] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.539 02:38:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:50.929 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:50.929 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:50.929 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:50.929 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:50.929 02:38:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:52.834 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.834 [2024-12-16 02:38:23.452374] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.834 02:38:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:54.211 02:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:54.211 02:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:54.211 02:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:16:54.211 02:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:54.211 02:38:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:56.114 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:56.114 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:56.114 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:56.114 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:56.114 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:56.114 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:56.114 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:56.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:56.114 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:56.114 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:56.114 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:56.114 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:56.114 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:56.114 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:56.114 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:16:56.114 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:56.115 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.115 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.115 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.115 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:56.115 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.115 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.373 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.373 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:56.373 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:56.373 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.373 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.373 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.373 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:56.373 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.373 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.374 [2024-12-16 02:38:26.788303] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.374 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.374 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:56.374 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.374 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.374 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.374 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:56.374 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.374 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.374 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.374 02:38:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:57.311 02:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:57.311 02:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:57.311 02:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:57.311 02:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:57.311 02:38:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:16:59.847 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:59.847 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:59.847 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:59.847 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:59.847 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:59.847 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:59.847 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:59.847 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:59.847 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:59.847 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:59.847 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:59.847 02:38:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.847 [2024-12-16 02:38:30.069060] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.847 [2024-12-16 02:38:30.121127] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:59.847 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.848 
02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:16:59.848 [2024-12-16 02:38:30.169269] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.848 [2024-12-16 02:38:30.217416] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.848 [2024-12-16 02:38:30.269578] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.848 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:59.848 "tick_rate": 2100000000, 00:16:59.848 "poll_groups": [ 00:16:59.848 { 00:16:59.848 "name": "nvmf_tgt_poll_group_000", 00:16:59.848 "admin_qpairs": 2, 00:16:59.848 "io_qpairs": 168, 00:16:59.848 "current_admin_qpairs": 0, 00:16:59.848 "current_io_qpairs": 0, 00:16:59.848 "pending_bdev_io": 0, 00:16:59.848 "completed_nvme_io": 220, 00:16:59.848 "transports": [ 00:16:59.848 { 00:16:59.848 "trtype": "TCP" 00:16:59.848 } 00:16:59.849 ] 00:16:59.849 }, 00:16:59.849 { 00:16:59.849 "name": "nvmf_tgt_poll_group_001", 00:16:59.849 "admin_qpairs": 2, 00:16:59.849 "io_qpairs": 168, 00:16:59.849 "current_admin_qpairs": 0, 00:16:59.849 "current_io_qpairs": 0, 00:16:59.849 "pending_bdev_io": 0, 00:16:59.849 "completed_nvme_io": 267, 00:16:59.849 "transports": [ 00:16:59.849 { 00:16:59.849 "trtype": "TCP" 00:16:59.849 } 00:16:59.849 ] 00:16:59.849 }, 00:16:59.849 { 00:16:59.849 "name": "nvmf_tgt_poll_group_002", 00:16:59.849 "admin_qpairs": 1, 00:16:59.849 "io_qpairs": 168, 00:16:59.849 "current_admin_qpairs": 0, 00:16:59.849 "current_io_qpairs": 0, 00:16:59.849 "pending_bdev_io": 0, 
00:16:59.849 "completed_nvme_io": 267, 00:16:59.849 "transports": [ 00:16:59.849 { 00:16:59.849 "trtype": "TCP" 00:16:59.849 } 00:16:59.849 ] 00:16:59.849 }, 00:16:59.849 { 00:16:59.849 "name": "nvmf_tgt_poll_group_003", 00:16:59.849 "admin_qpairs": 2, 00:16:59.849 "io_qpairs": 168, 00:16:59.849 "current_admin_qpairs": 0, 00:16:59.849 "current_io_qpairs": 0, 00:16:59.849 "pending_bdev_io": 0, 00:16:59.849 "completed_nvme_io": 268, 00:16:59.849 "transports": [ 00:16:59.849 { 00:16:59.849 "trtype": "TCP" 00:16:59.849 } 00:16:59.849 ] 00:16:59.849 } 00:16:59.849 ] 00:16:59.849 }' 00:16:59.849 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:59.849 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:59.849 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:59.849 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:59.849 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:59.849 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:59.849 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:59.849 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:59.849 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:59.849 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:16:59.849 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:59.849 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:59.849 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:16:59.849 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:59.849 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:59.849 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:59.849 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:59.849 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:59.849 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:59.849 rmmod nvme_tcp 00:16:59.849 rmmod nvme_fabrics 00:16:59.849 rmmod nvme_keyring 00:16:59.849 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:59.849 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:59.849 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:59.849 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 936032 ']' 00:16:59.849 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 936032 00:16:59.849 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 936032 ']' 00:16:59.849 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 936032 00:16:59.849 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:16:59.849 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:59.849 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 936032 00:17:00.108 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:00.108 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:00.108 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 936032' 00:17:00.108 killing process with pid 936032 00:17:00.108 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 936032 00:17:00.108 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 936032 00:17:00.108 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:00.108 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:00.108 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:00.108 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:00.108 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:00.108 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:17:00.108 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:17:00.108 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:00.108 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:00.108 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.108 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:00.108 02:38:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.646 02:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:02.646 00:17:02.646 real 0m32.907s 00:17:02.646 user 1m39.072s 00:17:02.646 sys 0m6.484s 00:17:02.646 02:38:32 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:02.646 02:38:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:02.646 ************************************ 00:17:02.646 END TEST nvmf_rpc 00:17:02.646 ************************************ 00:17:02.646 02:38:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:02.646 02:38:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:02.646 02:38:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:02.646 02:38:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:02.646 ************************************ 00:17:02.646 START TEST nvmf_invalid 00:17:02.646 ************************************ 00:17:02.646 02:38:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:02.646 * Looking for test storage... 
00:17:02.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:02.646 02:38:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:02.646 02:38:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:17:02.646 02:38:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:02.646 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:02.646 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:02.646 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:02.646 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:02.646 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:02.646 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:02.646 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:02.646 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:02.646 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:02.646 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:02.646 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:02.646 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:02.646 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:02.646 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:02.646 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:17:02.646 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:02.646 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:02.646 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:02.646 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:02.646 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:02.646 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:02.646 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:02.646 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:02.646 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:02.646 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:02.646 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:02.646 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:02.646 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:02.646 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:02.646 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:02.646 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:02.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.647 --rc genhtml_branch_coverage=1 00:17:02.647 --rc 
genhtml_function_coverage=1 00:17:02.647 --rc genhtml_legend=1 00:17:02.647 --rc geninfo_all_blocks=1 00:17:02.647 --rc geninfo_unexecuted_blocks=1 00:17:02.647 00:17:02.647 ' 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:02.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.647 --rc genhtml_branch_coverage=1 00:17:02.647 --rc genhtml_function_coverage=1 00:17:02.647 --rc genhtml_legend=1 00:17:02.647 --rc geninfo_all_blocks=1 00:17:02.647 --rc geninfo_unexecuted_blocks=1 00:17:02.647 00:17:02.647 ' 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:02.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.647 --rc genhtml_branch_coverage=1 00:17:02.647 --rc genhtml_function_coverage=1 00:17:02.647 --rc genhtml_legend=1 00:17:02.647 --rc geninfo_all_blocks=1 00:17:02.647 --rc geninfo_unexecuted_blocks=1 00:17:02.647 00:17:02.647 ' 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:02.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:02.647 --rc genhtml_branch_coverage=1 00:17:02.647 --rc genhtml_function_coverage=1 00:17:02.647 --rc genhtml_legend=1 00:17:02.647 --rc geninfo_all_blocks=1 00:17:02.647 --rc geninfo_unexecuted_blocks=1 00:17:02.647 00:17:02.647 ' 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:02.647 02:38:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:02.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:02.647 02:38:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:02.647 02:38:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:09.220 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:09.220 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:09.220 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:09.220 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:09.220 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:09.220 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:09.220 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:09.220 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:09.220 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:09.220 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:09.220 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:09.220 02:38:38 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:09.220 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:09.220 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:09.220 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:09.220 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:09.220 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:09.220 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:09.220 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:09.220 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:09.220 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:09.220 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:09.220 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:09.220 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:09.220 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:09.221 02:38:38 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:09.221 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:09.221 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:09.221 Found net devices under 0000:af:00.0: cvl_0_0 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:09.221 Found net devices under 0000:af:00.1: cvl_0_1 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:09.221 02:38:38 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:09.221 02:38:38 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:09.221 02:38:38 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:09.221 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:09.221 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:09.221 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.458 ms 00:17:09.221 00:17:09.221 --- 10.0.0.2 ping statistics --- 00:17:09.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.221 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:17:09.221 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:09.221 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:09.221 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:17:09.221 00:17:09.221 --- 10.0.0.1 ping statistics --- 00:17:09.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:09.221 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:17:09.221 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:09.221 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:17:09.221 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:09.221 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:09.221 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:09.221 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:09.221 02:38:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:09.221 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:09.221 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:09.221 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:09.221 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:09.221 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:09.221 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:09.221 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=943625 00:17:09.221 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 943625 00:17:09.221 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:09.221 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 943625 ']' 00:17:09.221 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.221 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:09.222 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:09.222 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:09.222 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:09.222 [2024-12-16 02:38:39.110953] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:17:09.222 [2024-12-16 02:38:39.110998] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.222 [2024-12-16 02:38:39.190875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:09.222 [2024-12-16 02:38:39.213471] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.222 [2024-12-16 02:38:39.213508] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.222 [2024-12-16 02:38:39.213515] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:09.222 [2024-12-16 02:38:39.213521] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:09.222 [2024-12-16 02:38:39.213526] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:09.222 [2024-12-16 02:38:39.214935] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.222 [2024-12-16 02:38:39.215047] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:09.222 [2024-12-16 02:38:39.215077] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.222 [2024-12-16 02:38:39.215079] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:09.222 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:09.222 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:17:09.222 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:09.222 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:09.222 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:09.222 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:09.222 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:09.222 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode20969 00:17:09.222 [2024-12-16 02:38:39.520132] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:09.222 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:09.222 { 00:17:09.222 "nqn": "nqn.2016-06.io.spdk:cnode20969", 00:17:09.222 "tgt_name": "foobar", 00:17:09.222 "method": "nvmf_create_subsystem", 00:17:09.222 "req_id": 1 00:17:09.222 } 00:17:09.222 Got JSON-RPC error 
response 00:17:09.222 response: 00:17:09.222 { 00:17:09.222 "code": -32603, 00:17:09.222 "message": "Unable to find target foobar" 00:17:09.222 }' 00:17:09.222 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:09.222 { 00:17:09.222 "nqn": "nqn.2016-06.io.spdk:cnode20969", 00:17:09.222 "tgt_name": "foobar", 00:17:09.222 "method": "nvmf_create_subsystem", 00:17:09.222 "req_id": 1 00:17:09.222 } 00:17:09.222 Got JSON-RPC error response 00:17:09.222 response: 00:17:09.222 { 00:17:09.222 "code": -32603, 00:17:09.222 "message": "Unable to find target foobar" 00:17:09.222 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:09.222 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:09.222 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode6977 00:17:09.222 [2024-12-16 02:38:39.736867] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6977: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:09.222 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:09.222 { 00:17:09.222 "nqn": "nqn.2016-06.io.spdk:cnode6977", 00:17:09.222 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:09.222 "method": "nvmf_create_subsystem", 00:17:09.222 "req_id": 1 00:17:09.222 } 00:17:09.222 Got JSON-RPC error response 00:17:09.222 response: 00:17:09.222 { 00:17:09.222 "code": -32602, 00:17:09.222 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:09.222 }' 00:17:09.222 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:09.222 { 00:17:09.222 "nqn": "nqn.2016-06.io.spdk:cnode6977", 00:17:09.222 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:09.222 "method": "nvmf_create_subsystem", 00:17:09.222 
"req_id": 1 00:17:09.222 } 00:17:09.222 Got JSON-RPC error response 00:17:09.222 response: 00:17:09.222 { 00:17:09.222 "code": -32602, 00:17:09.222 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:09.222 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:09.222 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:09.222 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode24155 00:17:09.482 [2024-12-16 02:38:39.941549] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24155: invalid model number 'SPDK_Controller' 00:17:09.482 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:09.482 { 00:17:09.482 "nqn": "nqn.2016-06.io.spdk:cnode24155", 00:17:09.482 "model_number": "SPDK_Controller\u001f", 00:17:09.482 "method": "nvmf_create_subsystem", 00:17:09.482 "req_id": 1 00:17:09.482 } 00:17:09.482 Got JSON-RPC error response 00:17:09.482 response: 00:17:09.482 { 00:17:09.482 "code": -32602, 00:17:09.482 "message": "Invalid MN SPDK_Controller\u001f" 00:17:09.482 }' 00:17:09.482 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:09.482 { 00:17:09.482 "nqn": "nqn.2016-06.io.spdk:cnode24155", 00:17:09.482 "model_number": "SPDK_Controller\u001f", 00:17:09.482 "method": "nvmf_create_subsystem", 00:17:09.482 "req_id": 1 00:17:09.482 } 00:17:09.482 Got JSON-RPC error response 00:17:09.482 response: 00:17:09.482 { 00:17:09.482 "code": -32602, 00:17:09.482 "message": "Invalid MN SPDK_Controller\u001f" 00:17:09.482 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:09.482 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:09.482 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:17:09.482 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:09.482 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:09.482 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:09.482 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:09.482 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.482 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:17:09.482 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:17:09.482 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:17:09.482 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.482 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.482 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:09.482 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:09.482 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:09.482 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.482 02:38:39 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.482 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:09.482 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:09.482 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:09.482 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.482 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.482 02:38:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:17:09.482 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:17:09.482 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:17:09.482 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.482 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.482 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:09.482 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:17:09.482 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:09.482 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.482 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.482 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:09.482 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:09.482 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:09.482 02:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.482 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.482 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:17:09.482 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:17:09.482 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:17:09.482 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.482 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.482 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:09.482 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:09.482 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:09.482 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.482 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.482 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:17:09.482 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:17:09.482 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:17:09.482 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.482 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:09.483 02:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:17:09.483 02:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.483 02:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.483 02:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ u == \- ]] 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'uk2!/{y"#U00nzQ0Dp^%' 00:17:09.483 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'uk2!/{y"#U00nzQ0Dp^%' nqn.2016-06.io.spdk:cnode2308 00:17:09.742 [2024-12-16 02:38:40.286731] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2308: invalid serial number 'uk2!/{y"#U00nzQ0Dp^%' 00:17:09.742 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:09.742 { 00:17:09.743 "nqn": "nqn.2016-06.io.spdk:cnode2308", 00:17:09.743 "serial_number": "uk2!/{y\"#U00nz\u007fQ0Dp^%", 00:17:09.743 "method": "nvmf_create_subsystem", 00:17:09.743 "req_id": 1 00:17:09.743 } 00:17:09.743 Got JSON-RPC error response 00:17:09.743 response: 00:17:09.743 { 00:17:09.743 "code": -32602, 00:17:09.743 "message": "Invalid SN uk2!/{y\"#U00nz\u007fQ0Dp^%" 00:17:09.743 }' 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:09.743 { 00:17:09.743 "nqn": "nqn.2016-06.io.spdk:cnode2308", 00:17:09.743 "serial_number": "uk2!/{y\"#U00nz\u007fQ0Dp^%", 00:17:09.743 "method": "nvmf_create_subsystem", 00:17:09.743 "req_id": 1 00:17:09.743 } 00:17:09.743 Got JSON-RPC error response 00:17:09.743 response: 00:17:09.743 { 00:17:09.743 "code": -32602, 00:17:09.743 "message": "Invalid SN uk2!/{y\"#U00nz\u007fQ0Dp^%" 00:17:09.743 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 
00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 
00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:09.743 
02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:09.743 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.002 02:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:10.002 02:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:17:10.002 02:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:17:10.002 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:17:10.003 02:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 
00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:10.003 
02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.003 02:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:10.003 02:38:40 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ? == \- ]] 00:17:10.003 02:38:40 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '?MXf:`/M.E3_K;^]O1'\''HH /dev/null' 00:17:12.335 02:38:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.240 02:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:14.240 00:17:14.240 real 0m12.029s 00:17:14.240 user 0m18.627s 00:17:14.240 sys 0m5.419s 00:17:14.240 02:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:14.240 02:38:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:14.240 ************************************ 00:17:14.240 END TEST nvmf_invalid 00:17:14.240 ************************************ 00:17:14.499 02:38:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:14.499 02:38:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:14.499 02:38:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:14.499 02:38:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:14.499 ************************************ 00:17:14.499 START TEST nvmf_connect_stress 00:17:14.499 ************************************ 00:17:14.499 02:38:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:14.499 * Looking for test storage... 
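The nvmf_invalid trace above shows `target/invalid.sh` building a random printable string one character at a time: pick a code point with `printf %x`, render it with `echo -e '\xNN'`, and append it with `string+=`. A minimal standalone sketch of that technique (the function name `gen_random_string` and the 0x21-0x7e range are assumptions for illustration, inferred from the `\x27`, `\x3c`, `\x7e` values visible in the log):

```shell
#!/usr/bin/env bash
# Sketch of the string-building loop seen in the target/invalid.sh trace:
# pick a code point, render it via printf/echo -e, append to the result.
gen_random_string() {
    local length=$1 string='' ll chr
    for (( ll = 0; ll < length; ll++ )); do
        # Printable ASCII 0x21-0x7e (assumed range; avoids space/control chars)
        chr=$(printf '%x' $(( RANDOM % 94 + 33 )))
        string+=$(echo -e "\x$chr")
    done
    echo "$string"
}

gen_random_string 16
```

The per-character `printf`/`echo` round trip is what makes the xtrace output above so long: every appended character produces three traced commands (`printf %x`, `echo -e`, `string+=`).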
00:17:14.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:14.499 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:14.499 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:17:14.499 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:14.499 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:14.499 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:14.499 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:14.499 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:14.499 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:14.499 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:14.499 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:14.499 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:14.499 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:14.499 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:14.499 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:14.499 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:14.499 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:14.499 02:38:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:14.499 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:14.500 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:14.500 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:14.500 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:14.500 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:14.500 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:14.500 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:14.500 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:14.500 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:14.500 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:14.500 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:14.500 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:14.500 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:14.500 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:14.500 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:14.500 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:14.500 02:38:45 
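The `scripts/common.sh` trace above (`lt 1.15 2` → `cmp_versions 1.15 '<' 2`) compares versions by splitting each string on `.-:` into an array and comparing components numerically, padding the shorter version with zeros. A minimal standalone sketch of that comparison, assuming only the behavior visible in the trace (the function name `version_lt` is hypothetical):

```shell
#!/usr/bin/env bash
# Component-wise version comparison, mirroring the cmp_versions trace:
# split on ".-:", walk the longer array, compare each component numerically.
version_lt() {
    local IFS='.-:' v
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # Missing components compare as 0 (so "2" behaves like "2.0")
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1  # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

In the log this check gates which lcov options are usable: because the installed lcov is older than 2, the script selects the `--rc lcov_branch_coverage=1` option spelling.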
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:14.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.500 --rc genhtml_branch_coverage=1 00:17:14.500 --rc genhtml_function_coverage=1 00:17:14.500 --rc genhtml_legend=1 00:17:14.500 --rc geninfo_all_blocks=1 00:17:14.500 --rc geninfo_unexecuted_blocks=1 00:17:14.500 00:17:14.500 ' 00:17:14.500 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:14.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.500 --rc genhtml_branch_coverage=1 00:17:14.500 --rc genhtml_function_coverage=1 00:17:14.500 --rc genhtml_legend=1 00:17:14.500 --rc geninfo_all_blocks=1 00:17:14.500 --rc geninfo_unexecuted_blocks=1 00:17:14.500 00:17:14.500 ' 00:17:14.500 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:14.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.500 --rc genhtml_branch_coverage=1 00:17:14.500 --rc genhtml_function_coverage=1 00:17:14.500 --rc genhtml_legend=1 00:17:14.500 --rc geninfo_all_blocks=1 00:17:14.500 --rc geninfo_unexecuted_blocks=1 00:17:14.500 00:17:14.500 ' 00:17:14.500 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:14.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.500 --rc genhtml_branch_coverage=1 00:17:14.500 --rc genhtml_function_coverage=1 00:17:14.500 --rc genhtml_legend=1 00:17:14.500 --rc geninfo_all_blocks=1 00:17:14.500 --rc geninfo_unexecuted_blocks=1 00:17:14.500 00:17:14.500 ' 00:17:14.500 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:14.500 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:17:14.500 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:14.758 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:14.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:14.759 02:38:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:21.330 02:38:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:21.330 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:21.330 02:38:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:21.330 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.330 02:38:50 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:21.330 Found net devices under 0000:af:00.0: cvl_0_0 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.330 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:21.330 Found net devices under 0000:af:00.1: cvl_0_1 
00:17:21.331 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.331 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:21.331 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:17:21.331 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:21.331 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:21.331 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:21.331 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:21.331 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:21.331 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:21.331 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:21.331 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:21.331 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:21.331 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:21.331 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:21.331 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:21.331 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:21.331 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:21.331 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:21.331 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:21.331 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:21.331 02:38:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:21.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:21.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:17:21.331 00:17:21.331 --- 10.0.0.2 ping statistics --- 00:17:21.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.331 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:21.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:21.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:17:21.331 00:17:21.331 --- 10.0.0.1 ping statistics --- 00:17:21.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.331 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:21.331 02:38:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=947922 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 947922 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 947922 ']' 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.331 [2024-12-16 02:38:51.265100] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:17:21.331 [2024-12-16 02:38:51.265151] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:21.331 [2024-12-16 02:38:51.345724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:21.331 [2024-12-16 02:38:51.367945] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:21.331 [2024-12-16 02:38:51.367980] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:21.331 [2024-12-16 02:38:51.367987] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:21.331 [2024-12-16 02:38:51.367993] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:21.331 [2024-12-16 02:38:51.367999] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:21.331 [2024-12-16 02:38:51.369225] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.331 [2024-12-16 02:38:51.369337] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.331 [2024-12-16 02:38:51.369338] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.331 [2024-12-16 02:38:51.500789] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.331 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.332 [2024-12-16 02:38:51.520994] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.332 NULL1 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=947949 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.332 02:38:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.900 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.900 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:21.900 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.900 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.900 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.158 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.158 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:22.158 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.158 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.158 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.417 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.417 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:22.417 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.417 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.417 02:38:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.676 02:38:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.676 02:38:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:22.676 02:38:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.676 02:38:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.676 02:38:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.935 02:38:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.935 02:38:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:22.935 02:38:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.935 02:38:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.935 02:38:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.502 02:38:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.502 02:38:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:23.502 02:38:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.502 02:38:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.502 02:38:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.761 02:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.761 02:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:23.761 02:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.761 02:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.761 02:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.020 02:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.020 02:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:24.020 02:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.020 02:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.020 02:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.279 02:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.279 02:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:24.279 02:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.279 02:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.279 02:38:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.846 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.846 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:24.846 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.846 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.846 02:38:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.105 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.105 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:25.105 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:25.105 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.105 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.363 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.363 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:25.363 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:25.363 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.363 02:38:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.622 02:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.622 02:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:25.622 02:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:25.622 02:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.622 02:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:25.881 02:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.881 02:38:56 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:25.881 02:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:25.881 02:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.881 02:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:26.449 02:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.449 02:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:26.449 02:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:26.449 02:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.449 02:38:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:26.708 02:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.708 02:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:26.708 02:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:26.708 02:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.708 02:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:26.967 02:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.967 02:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:26.967 02:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:26.967 02:38:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.967 02:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:27.225 02:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.225 02:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:27.225 02:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:27.225 02:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.225 02:38:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:27.484 02:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.484 02:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:27.484 02:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:27.484 02:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.484 02:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:28.052 02:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.052 02:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:28.052 02:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:28.052 02:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.052 02:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:28.311 02:38:58 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.311 02:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:28.311 02:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:28.311 02:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.311 02:38:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:28.570 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.570 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:28.570 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:28.570 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.570 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:28.828 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.828 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:28.828 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:28.828 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.828 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.395 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.395 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:29.395 
02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:29.395 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.395 02:38:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.654 02:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.654 02:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:29.654 02:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:29.654 02:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.654 02:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.912 02:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.912 02:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:29.912 02:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:29.912 02:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.912 02:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:30.170 02:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.170 02:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:30.170 02:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:30.170 02:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.170 
02:39:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:30.428 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.429 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:30.429 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:30.429 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.429 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:30.996 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.996 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:30.996 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:30.996 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.996 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:30.996 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:31.256 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.256 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947949 00:17:31.256 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (947949) - No such process 00:17:31.256 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 947949 00:17:31.256 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:31.256 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:31.256 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:31.256 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:31.256 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:31.256 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:31.256 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:31.256 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:31.256 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:31.256 rmmod nvme_tcp 00:17:31.256 rmmod nvme_fabrics 00:17:31.256 rmmod nvme_keyring 00:17:31.256 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:31.256 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:31.256 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:31.256 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 947922 ']' 00:17:31.256 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 947922 00:17:31.256 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 947922 ']' 00:17:31.256 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 947922 00:17:31.256 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 
00:17:31.256 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:31.256 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 947922 00:17:31.256 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:31.256 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:31.256 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 947922' 00:17:31.256 killing process with pid 947922 00:17:31.256 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 947922 00:17:31.256 02:39:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 947922 00:17:31.515 02:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:31.515 02:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:31.515 02:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:31.515 02:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:17:31.515 02:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:17:31.515 02:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:31.515 02:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:17:31.515 02:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:31.515 02:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:17:31.516 02:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.516 02:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:31.516 02:39:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.476 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:33.476 00:17:33.476 real 0m19.110s 00:17:33.476 user 0m39.362s 00:17:33.476 sys 0m8.580s 00:17:33.476 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:33.476 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:33.476 ************************************ 00:17:33.476 END TEST nvmf_connect_stress 00:17:33.476 ************************************ 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:33.794 ************************************ 00:17:33.794 START TEST nvmf_fused_ordering 00:17:33.794 ************************************ 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:33.794 * Looking for test storage... 
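The long run of `kill -0 947949` checks above is connect_stress.sh polling whether the stress-test process is still alive: signal 0 delivers nothing but fails once the PID is gone, at which point the script falls through to `wait` and cleanup ("No such process", then `wait 947949`). A minimal sketch of that pattern, reconstructed from the trace (the function name, sleep interval, and the demo job are hypothetical, not SPDK code):

```shell
#!/usr/bin/env bash
# Hypothetical reconstruction of the liveness poll seen in the trace above:
# kill -0 sends no signal, but succeeds only while the target PID exists.
poll_until_exit() {
    local pid=$1
    while kill -0 "$pid" 2>/dev/null; do
        sleep 0.1            # the real script issues RPCs between checks
    done
    wait "$pid" 2>/dev/null  # collect the exit status once it is gone
}

# Usage sketch: launch a short-lived background job and wait it out.
sleep 0.2 &
poll_until_exit $!
echo "background job has exited"
```

Because bash reaps its own background children, `kill -0` starts failing promptly after the child exits, which is exactly the "No such process" transition visible in the log.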
00:17:33.794 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:33.794 02:39:04 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:33.794 02:39:04 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:33.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.794 --rc genhtml_branch_coverage=1 00:17:33.794 --rc genhtml_function_coverage=1 00:17:33.794 --rc genhtml_legend=1 00:17:33.794 --rc geninfo_all_blocks=1 00:17:33.794 --rc geninfo_unexecuted_blocks=1 00:17:33.794 00:17:33.794 ' 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:33.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.794 --rc genhtml_branch_coverage=1 00:17:33.794 --rc genhtml_function_coverage=1 00:17:33.794 --rc genhtml_legend=1 00:17:33.794 --rc geninfo_all_blocks=1 00:17:33.794 --rc geninfo_unexecuted_blocks=1 00:17:33.794 00:17:33.794 ' 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:33.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.794 --rc genhtml_branch_coverage=1 00:17:33.794 --rc genhtml_function_coverage=1 00:17:33.794 --rc genhtml_legend=1 00:17:33.794 --rc geninfo_all_blocks=1 00:17:33.794 --rc geninfo_unexecuted_blocks=1 00:17:33.794 00:17:33.794 ' 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:33.794 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:33.794 --rc genhtml_branch_coverage=1 00:17:33.794 --rc genhtml_function_coverage=1 00:17:33.794 --rc genhtml_legend=1 00:17:33.794 --rc geninfo_all_blocks=1 00:17:33.794 --rc geninfo_unexecuted_blocks=1 00:17:33.794 00:17:33.794 ' 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:33.794 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:33.795 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.795 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.795 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.795 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:33.795 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.795 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:33.795 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:33.795 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:33.795 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:33.795 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:33.795 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:33.795 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:33.795 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:33.795 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:33.795 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:33.795 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:33.795 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
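The `lcov --version` block above steps through scripts/common.sh's version comparison: both version strings are split on `.`, `-`, or `:` (`IFS=.-:`; `read -ra ver1`), then compared numerically field by field (`lt 1.15 2` succeeds, selecting the lcov-1.x options). A condensed sketch of that logic under the same splitting convention (the function name `ver_lt` is an illustration, not the script's API):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the component-wise version compare the trace walks
# through: split on '.', '-', or ':' and compare numerically per field.
ver_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v
    for (( v = 0; v < len; v++ )); do
        # Missing components compare as 0 (e.g. "2" is treated as "2.0").
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1    # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

This mirrors the decision visible in the log, where version 1.15 compares below 2 and the `lcov_branch_coverage`/`lcov_function_coverage` flags are chosen accordingly.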
00:17:33.795 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:33.795 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:33.795 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:33.795 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:33.795 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:33.795 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.795 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:33.795 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.795 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:33.795 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:33.795 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:33.795 02:39:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:40.373 02:39:10 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:40.373 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:40.373 02:39:10 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:40.373 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.373 02:39:10 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:40.373 Found net devices under 0000:af:00.0: cvl_0_0 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:40.373 Found net devices under 0000:af:00.1: cvl_0_1 
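The device-discovery loop above maps each supported PCI address to its kernel network interface by globbing sysfs (`pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)`, then stripping the path to get `cvl_0_0`/`cvl_0_1`). A self-contained sketch of that mechanism (the `SYSFS_ROOT` override and function name are illustrative additions for testability, not part of nvmf/common.sh):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the NIC discovery seen in the trace: each entry
# under /sys/bus/pci/devices/<addr>/net/ names an interface bound to that
# PCI device.
discover_net_devs() {
    local sysfs=${SYSFS_ROOT:-/sys}   # overridable root (illustration only)
    local pci
    local -a pci_net_devs net_devs=()
    for pci in "$@"; do
        pci_net_devs=("$sysfs/bus/pci/devices/$pci/net/"*)
        # With nullglob unset, an unmatched glob stays literal; skip it.
        [[ -e ${pci_net_devs[0]} ]] || continue
        net_devs+=("${pci_net_devs[@]##*/}")   # keep only interface names
    done
    (( ${#net_devs[@]} )) && printf '%s\n' "${net_devs[@]}"
}
```

Running this over the two E810 functions found in the log (0000:af:00.0 and 0000:af:00.1) would yield the interface list the script then uses to pick target and initiator sides.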
00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:40.373 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:40.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:40.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:17:40.374 00:17:40.374 --- 10.0.0.2 ping statistics --- 00:17:40.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.374 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:40.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:40.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:17:40.374 00:17:40.374 --- 10.0.0.1 ping statistics --- 00:17:40.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.374 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:40.374 02:39:10 
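The `nvmf_tcp_init` phase traced above (nvmf/common.sh@250-291) isolates the target port in its own network namespace and verifies connectivity in both directions before the target starts. A sketch of the equivalent commands, under the assumption of the same interface names and addresses as in this log (requires root; `ipts` in the trace is the harness's iptables wrapper that adds the `SPDK_NVMF` comment):

```shell
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                # target port gets its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side stays in the host ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.2                          # host -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> host
```

Because the target runs inside the namespace, `NVMF_APP` is subsequently prefixed with `ip netns exec cvl_0_0_ns_spdk`, as the trace shows at nvmf/common.sh@293.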
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=953678 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 953678 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 953678 ']' 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:40.374 [2024-12-16 02:39:10.374579] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:17:40.374 [2024-12-16 02:39:10.374627] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.374 [2024-12-16 02:39:10.451538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.374 [2024-12-16 02:39:10.472093] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.374 [2024-12-16 02:39:10.472131] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.374 [2024-12-16 02:39:10.472138] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.374 [2024-12-16 02:39:10.472143] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.374 [2024-12-16 02:39:10.472149] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:40.374 [2024-12-16 02:39:10.472628] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:40.374 [2024-12-16 02:39:10.611375] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:40.374 [2024-12-16 02:39:10.631576] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:40.374 NULL1 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
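The `rpc_cmd` calls traced above (fused_ordering.sh@15-20) assemble the target configuration before the fused_ordering tool connects. A sketch of the same sequence issued directly through SPDK's RPC client (`scripts/rpc.py`; arguments mirror the log, and per the "size: 1GB" line the null bdev is 1000 MB with 512-byte blocks):

```shell
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512      # 1000 MB null bdev, 512 B blocks
rpc.py bdev_wait_for_examine
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
```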
common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.374 02:39:10 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:40.374 [2024-12-16 02:39:10.688458] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:17:40.374 [2024-12-16 02:39:10.688490] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid953750 ] 00:17:40.633 Attached to nqn.2016-06.io.spdk:cnode1 00:17:40.633 Namespace ID: 1 size: 1GB 00:17:40.633 fused_ordering(0) 00:17:40.633 fused_ordering(1) 00:17:40.633 fused_ordering(2) 00:17:40.633 fused_ordering(3) 00:17:40.633 fused_ordering(4) 00:17:40.633 fused_ordering(5) 00:17:40.633 fused_ordering(6) 00:17:40.633 fused_ordering(7) 00:17:40.633 fused_ordering(8) 00:17:40.633 fused_ordering(9) 00:17:40.633 fused_ordering(10) 00:17:40.633 fused_ordering(11) 00:17:40.633 fused_ordering(12) 00:17:40.633 fused_ordering(13) 00:17:40.633 fused_ordering(14) 00:17:40.633 fused_ordering(15) 00:17:40.633 fused_ordering(16) 00:17:40.633 fused_ordering(17) 00:17:40.633 fused_ordering(18) 00:17:40.633 fused_ordering(19) 00:17:40.633 fused_ordering(20) 00:17:40.633 fused_ordering(21) 00:17:40.633 fused_ordering(22) 00:17:40.633 fused_ordering(23) 00:17:40.633 fused_ordering(24) 00:17:40.633 fused_ordering(25) 00:17:40.633 fused_ordering(26) 00:17:40.633 fused_ordering(27) 00:17:40.633 
fused_ordering(28) 00:17:40.633 fused_ordering(29) 00:17:40.633 fused_ordering(30) 00:17:40.633 fused_ordering(31) 00:17:40.633 fused_ordering(32) 00:17:40.633 fused_ordering(33) 00:17:40.633 fused_ordering(34) 00:17:40.633 fused_ordering(35) 00:17:40.633 fused_ordering(36) 00:17:40.633 fused_ordering(37) 00:17:40.633 fused_ordering(38) 00:17:40.633 fused_ordering(39) 00:17:40.633 fused_ordering(40) 00:17:40.633 fused_ordering(41) 00:17:40.633 fused_ordering(42) 00:17:40.633 fused_ordering(43) 00:17:40.634 fused_ordering(44) 00:17:40.634 fused_ordering(45) 00:17:40.634 fused_ordering(46) 00:17:40.634 fused_ordering(47) 00:17:40.634 fused_ordering(48) 00:17:40.634 fused_ordering(49) 00:17:40.634 fused_ordering(50) 00:17:40.634 fused_ordering(51) 00:17:40.634 fused_ordering(52) 00:17:40.634 fused_ordering(53) 00:17:40.634 fused_ordering(54) 00:17:40.634 fused_ordering(55) 00:17:40.634 fused_ordering(56) 00:17:40.634 fused_ordering(57) 00:17:40.634 fused_ordering(58) 00:17:40.634 fused_ordering(59) 00:17:40.634 fused_ordering(60) 00:17:40.634 fused_ordering(61) 00:17:40.634 fused_ordering(62) 00:17:40.634 fused_ordering(63) 00:17:40.634 fused_ordering(64) 00:17:40.634 fused_ordering(65) 00:17:40.634 fused_ordering(66) 00:17:40.634 fused_ordering(67) 00:17:40.634 fused_ordering(68) 00:17:40.634 fused_ordering(69) 00:17:40.634 fused_ordering(70) 00:17:40.634 fused_ordering(71) 00:17:40.634 fused_ordering(72) 00:17:40.634 fused_ordering(73) 00:17:40.634 fused_ordering(74) 00:17:40.634 fused_ordering(75) 00:17:40.634 fused_ordering(76) 00:17:40.634 fused_ordering(77) 00:17:40.634 fused_ordering(78) 00:17:40.634 fused_ordering(79) 00:17:40.634 fused_ordering(80) 00:17:40.634 fused_ordering(81) 00:17:40.634 fused_ordering(82) 00:17:40.634 fused_ordering(83) 00:17:40.634 fused_ordering(84) 00:17:40.634 fused_ordering(85) 00:17:40.634 fused_ordering(86) 00:17:40.634 fused_ordering(87) 00:17:40.634 fused_ordering(88) 00:17:40.634 fused_ordering(89) 00:17:40.634 
fused_ordering(90) 00:17:40.634 fused_ordering(91) 00:17:40.634 fused_ordering(92) 00:17:40.634 fused_ordering(93) 00:17:40.634 fused_ordering(94) 00:17:40.634 fused_ordering(95) 00:17:40.634 fused_ordering(96) 00:17:40.634 fused_ordering(97) 00:17:40.634 fused_ordering(98) 00:17:40.634 fused_ordering(99) 00:17:40.634 fused_ordering(100) 00:17:40.634 fused_ordering(101) 00:17:40.634 fused_ordering(102) 00:17:40.634 fused_ordering(103) 00:17:40.634 fused_ordering(104) 00:17:40.634 fused_ordering(105) 00:17:40.634 fused_ordering(106) 00:17:40.634 fused_ordering(107) 00:17:40.634 fused_ordering(108) 00:17:40.634 fused_ordering(109) 00:17:40.634 fused_ordering(110) 00:17:40.634 fused_ordering(111) 00:17:40.634 fused_ordering(112) 00:17:40.634 fused_ordering(113) 00:17:40.634 fused_ordering(114) 00:17:40.634 fused_ordering(115) 00:17:40.634 fused_ordering(116) 00:17:40.634 fused_ordering(117) 00:17:40.634 fused_ordering(118) 00:17:40.634 fused_ordering(119) 00:17:40.634 fused_ordering(120) 00:17:40.634 fused_ordering(121) 00:17:40.634 fused_ordering(122) 00:17:40.634 fused_ordering(123) 00:17:40.634 fused_ordering(124) 00:17:40.634 fused_ordering(125) 00:17:40.634 fused_ordering(126) 00:17:40.634 fused_ordering(127) 00:17:40.634 fused_ordering(128) 00:17:40.634 fused_ordering(129) 00:17:40.634 fused_ordering(130) 00:17:40.634 fused_ordering(131) 00:17:40.634 fused_ordering(132) 00:17:40.634 fused_ordering(133) 00:17:40.634 fused_ordering(134) 00:17:40.634 fused_ordering(135) 00:17:40.634 fused_ordering(136) 00:17:40.634 fused_ordering(137) 00:17:40.634 fused_ordering(138) 00:17:40.634 fused_ordering(139) 00:17:40.634 fused_ordering(140) 00:17:40.634 fused_ordering(141) 00:17:40.634 fused_ordering(142) 00:17:40.634 fused_ordering(143) 00:17:40.634 fused_ordering(144) 00:17:40.634 fused_ordering(145) 00:17:40.634 fused_ordering(146) 00:17:40.634 fused_ordering(147) 00:17:40.634 fused_ordering(148) 00:17:40.634 fused_ordering(149) 00:17:40.634 fused_ordering(150) 
00:17:40.634 fused_ordering(151) 00:17:40.634 fused_ordering(152) 00:17:40.634 fused_ordering(153) 00:17:40.634 fused_ordering(154) 00:17:40.634 fused_ordering(155) 00:17:40.634 fused_ordering(156) 00:17:40.634 fused_ordering(157) 00:17:40.634 fused_ordering(158) 00:17:40.634 fused_ordering(159) 00:17:40.634 fused_ordering(160) 00:17:40.634 fused_ordering(161) 00:17:40.634 fused_ordering(162) 00:17:40.634 fused_ordering(163) 00:17:40.634 fused_ordering(164) 00:17:40.634 fused_ordering(165) 00:17:40.634 fused_ordering(166) 00:17:40.634 fused_ordering(167) 00:17:40.634 fused_ordering(168) 00:17:40.634 fused_ordering(169) 00:17:40.634 fused_ordering(170) 00:17:40.634 fused_ordering(171) 00:17:40.634 fused_ordering(172) 00:17:40.634 fused_ordering(173) 00:17:40.634 fused_ordering(174) 00:17:40.634 fused_ordering(175) 00:17:40.634 fused_ordering(176) 00:17:40.634 fused_ordering(177) 00:17:40.634 fused_ordering(178) 00:17:40.634 fused_ordering(179) 00:17:40.634 fused_ordering(180) 00:17:40.634 fused_ordering(181) 00:17:40.634 fused_ordering(182) 00:17:40.634 fused_ordering(183) 00:17:40.634 fused_ordering(184) 00:17:40.634 fused_ordering(185) 00:17:40.634 fused_ordering(186) 00:17:40.634 fused_ordering(187) 00:17:40.634 fused_ordering(188) 00:17:40.634 fused_ordering(189) 00:17:40.634 fused_ordering(190) 00:17:40.634 fused_ordering(191) 00:17:40.634 fused_ordering(192) 00:17:40.634 fused_ordering(193) 00:17:40.634 fused_ordering(194) 00:17:40.634 fused_ordering(195) 00:17:40.634 fused_ordering(196) 00:17:40.634 fused_ordering(197) 00:17:40.634 fused_ordering(198) 00:17:40.634 fused_ordering(199) 00:17:40.634 fused_ordering(200) 00:17:40.634 fused_ordering(201) 00:17:40.634 fused_ordering(202) 00:17:40.634 fused_ordering(203) 00:17:40.634 fused_ordering(204) 00:17:40.634 fused_ordering(205) 00:17:40.893 fused_ordering(206) 00:17:40.893 fused_ordering(207) 00:17:40.893 fused_ordering(208) 00:17:40.893 fused_ordering(209) 00:17:40.893 fused_ordering(210) 00:17:40.893 
fused_ordering(211) 00:17:40.893 fused_ordering(212) 00:17:40.893 fused_ordering(213) 00:17:40.893 fused_ordering(214) 00:17:40.893 fused_ordering(215) 00:17:40.893 fused_ordering(216) 00:17:40.893 fused_ordering(217) 00:17:40.893 fused_ordering(218) 00:17:40.893 fused_ordering(219) 00:17:40.893 fused_ordering(220) 00:17:40.893 fused_ordering(221) 00:17:40.893 fused_ordering(222) 00:17:40.893 fused_ordering(223) 00:17:40.893 fused_ordering(224) 00:17:40.893 fused_ordering(225) 00:17:40.893 fused_ordering(226) 00:17:40.893 fused_ordering(227) 00:17:40.893 fused_ordering(228) 00:17:40.893 fused_ordering(229) 00:17:40.893 fused_ordering(230) 00:17:40.893 fused_ordering(231) 00:17:40.893 fused_ordering(232) 00:17:40.893 fused_ordering(233) 00:17:40.893 fused_ordering(234) 00:17:40.893 fused_ordering(235) 00:17:40.893 fused_ordering(236) 00:17:40.893 fused_ordering(237) 00:17:40.893 fused_ordering(238) 00:17:40.893 fused_ordering(239) 00:17:40.893 fused_ordering(240) 00:17:40.893 fused_ordering(241) 00:17:40.893 fused_ordering(242) 00:17:40.893 fused_ordering(243) 00:17:40.893 fused_ordering(244) 00:17:40.893 fused_ordering(245) 00:17:40.893 fused_ordering(246) 00:17:40.893 fused_ordering(247) 00:17:40.894 fused_ordering(248) 00:17:40.894 fused_ordering(249) 00:17:40.894 fused_ordering(250) 00:17:40.894 fused_ordering(251) 00:17:40.894 fused_ordering(252) 00:17:40.894 fused_ordering(253) 00:17:40.894 fused_ordering(254) 00:17:40.894 fused_ordering(255) 00:17:40.894 fused_ordering(256) 00:17:40.894 fused_ordering(257) 00:17:40.894 fused_ordering(258) 00:17:40.894 fused_ordering(259) 00:17:40.894 fused_ordering(260) 00:17:40.894 fused_ordering(261) 00:17:40.894 fused_ordering(262) 00:17:40.894 fused_ordering(263) 00:17:40.894 fused_ordering(264) 00:17:40.894 fused_ordering(265) 00:17:40.894 fused_ordering(266) 00:17:40.894 fused_ordering(267) 00:17:40.894 fused_ordering(268) 00:17:40.894 fused_ordering(269) 00:17:40.894 fused_ordering(270) 00:17:40.894 fused_ordering(271) 
00:17:40.894 fused_ordering(272) 00:17:40.894 fused_ordering(273) 00:17:40.894 fused_ordering(274) 00:17:40.894 fused_ordering(275) 00:17:40.894 fused_ordering(276) 00:17:40.894 fused_ordering(277) 00:17:40.894 fused_ordering(278) 00:17:40.894 fused_ordering(279) 00:17:40.894 fused_ordering(280) 00:17:40.894 fused_ordering(281) 00:17:40.894 fused_ordering(282) 00:17:40.894 fused_ordering(283) 00:17:40.894 fused_ordering(284) 00:17:40.894 fused_ordering(285) 00:17:40.894 fused_ordering(286) 00:17:40.894 fused_ordering(287) 00:17:40.894 fused_ordering(288) 00:17:40.894 fused_ordering(289) 00:17:40.894 fused_ordering(290) 00:17:40.894 fused_ordering(291) 00:17:40.894 fused_ordering(292) 00:17:40.894 fused_ordering(293) 00:17:40.894 fused_ordering(294) 00:17:40.894 fused_ordering(295) 00:17:40.894 fused_ordering(296) 00:17:40.894 fused_ordering(297) 00:17:40.894 fused_ordering(298) 00:17:40.894 fused_ordering(299) 00:17:40.894 fused_ordering(300) 00:17:40.894 fused_ordering(301) 00:17:40.894 fused_ordering(302) 00:17:40.894 fused_ordering(303) 00:17:40.894 fused_ordering(304) 00:17:40.894 fused_ordering(305) 00:17:40.894 fused_ordering(306) 00:17:40.894 fused_ordering(307) 00:17:40.894 fused_ordering(308) 00:17:40.894 fused_ordering(309) 00:17:40.894 fused_ordering(310) 00:17:40.894 fused_ordering(311) 00:17:40.894 fused_ordering(312) 00:17:40.894 fused_ordering(313) 00:17:40.894 fused_ordering(314) 00:17:40.894 fused_ordering(315) 00:17:40.894 fused_ordering(316) 00:17:40.894 fused_ordering(317) 00:17:40.894 fused_ordering(318) 00:17:40.894 fused_ordering(319) 00:17:40.894 fused_ordering(320) 00:17:40.894 fused_ordering(321) 00:17:40.894 fused_ordering(322) 00:17:40.894 fused_ordering(323) 00:17:40.894 fused_ordering(324) 00:17:40.894 fused_ordering(325) 00:17:40.894 fused_ordering(326) 00:17:40.894 fused_ordering(327) 00:17:40.894 fused_ordering(328) 00:17:40.894 fused_ordering(329) 00:17:40.894 fused_ordering(330) 00:17:40.894 fused_ordering(331) 00:17:40.894 
fused_ordering(332) 00:17:40.894 fused_ordering(333) 00:17:40.894 fused_ordering(334) 00:17:40.894 fused_ordering(335) 00:17:40.894 fused_ordering(336) 00:17:40.894 fused_ordering(337) 00:17:40.894 fused_ordering(338) 00:17:40.894 fused_ordering(339) 00:17:40.894 fused_ordering(340) 00:17:40.894 fused_ordering(341) 00:17:40.894 fused_ordering(342) 00:17:40.894 fused_ordering(343) 00:17:40.894 fused_ordering(344) 00:17:40.894 fused_ordering(345) 00:17:40.894 fused_ordering(346) 00:17:40.894 fused_ordering(347) 00:17:40.894 fused_ordering(348) 00:17:40.894 fused_ordering(349) 00:17:40.894 fused_ordering(350) 00:17:40.894 fused_ordering(351) 00:17:40.894 fused_ordering(352) 00:17:40.894 fused_ordering(353) 00:17:40.894 fused_ordering(354) 00:17:40.894 fused_ordering(355) 00:17:40.894 fused_ordering(356) 00:17:40.894 fused_ordering(357) 00:17:40.894 fused_ordering(358) 00:17:40.894 fused_ordering(359) 00:17:40.894 fused_ordering(360) 00:17:40.894 fused_ordering(361) 00:17:40.894 fused_ordering(362) 00:17:40.894 fused_ordering(363) 00:17:40.894 fused_ordering(364) 00:17:40.894 fused_ordering(365) 00:17:40.894 fused_ordering(366) 00:17:40.894 fused_ordering(367) 00:17:40.894 fused_ordering(368) 00:17:40.894 fused_ordering(369) 00:17:40.894 fused_ordering(370) 00:17:40.894 fused_ordering(371) 00:17:40.894 fused_ordering(372) 00:17:40.894 fused_ordering(373) 00:17:40.894 fused_ordering(374) 00:17:40.894 fused_ordering(375) 00:17:40.894 fused_ordering(376) 00:17:40.894 fused_ordering(377) 00:17:40.894 fused_ordering(378) 00:17:40.894 fused_ordering(379) 00:17:40.894 fused_ordering(380) 00:17:40.894 fused_ordering(381) 00:17:40.894 fused_ordering(382) 00:17:40.894 fused_ordering(383) 00:17:40.894 fused_ordering(384) 00:17:40.894 fused_ordering(385) 00:17:40.894 fused_ordering(386) 00:17:40.894 fused_ordering(387) 00:17:40.894 fused_ordering(388) 00:17:40.894 fused_ordering(389) 00:17:40.894 fused_ordering(390) 00:17:40.894 fused_ordering(391) 00:17:40.894 fused_ordering(392) 
00:17:40.894 fused_ordering(393) 00:17:40.894 fused_ordering(394) 00:17:40.894 fused_ordering(395) 00:17:40.894 fused_ordering(396) 00:17:40.894 fused_ordering(397) 00:17:40.894 fused_ordering(398) 00:17:40.894 fused_ordering(399) 00:17:40.894 fused_ordering(400) 00:17:40.894 fused_ordering(401) 00:17:40.894 fused_ordering(402) 00:17:40.894 fused_ordering(403) 00:17:40.894 fused_ordering(404) 00:17:40.894 fused_ordering(405) 00:17:40.894 fused_ordering(406) 00:17:40.894 fused_ordering(407) 00:17:40.894 fused_ordering(408) 00:17:40.894 fused_ordering(409) 00:17:40.894 fused_ordering(410) 00:17:41.154 fused_ordering(411) 00:17:41.154 fused_ordering(412) 00:17:41.154 fused_ordering(413) 00:17:41.154 fused_ordering(414) 00:17:41.154 fused_ordering(415) 00:17:41.154 fused_ordering(416) 00:17:41.154 fused_ordering(417) 00:17:41.154 fused_ordering(418) 00:17:41.154 fused_ordering(419) 00:17:41.154 fused_ordering(420) 00:17:41.154 fused_ordering(421) 00:17:41.154 fused_ordering(422) 00:17:41.154 fused_ordering(423) 00:17:41.154 fused_ordering(424) 00:17:41.154 fused_ordering(425) 00:17:41.154 fused_ordering(426) 00:17:41.154 fused_ordering(427) 00:17:41.154 fused_ordering(428) 00:17:41.154 fused_ordering(429) 00:17:41.154 fused_ordering(430) 00:17:41.154 fused_ordering(431) 00:17:41.154 fused_ordering(432) 00:17:41.154 fused_ordering(433) 00:17:41.154 fused_ordering(434) 00:17:41.154 fused_ordering(435) 00:17:41.154 fused_ordering(436) 00:17:41.154 fused_ordering(437) 00:17:41.154 fused_ordering(438) 00:17:41.154 fused_ordering(439) 00:17:41.154 fused_ordering(440) 00:17:41.154 fused_ordering(441) 00:17:41.154 fused_ordering(442) 00:17:41.154 fused_ordering(443) 00:17:41.154 fused_ordering(444) 00:17:41.154 fused_ordering(445) 00:17:41.154 fused_ordering(446) 00:17:41.154 fused_ordering(447) 00:17:41.154 fused_ordering(448) 00:17:41.154 fused_ordering(449) 00:17:41.154 fused_ordering(450) 00:17:41.154 fused_ordering(451) 00:17:41.154 fused_ordering(452) 00:17:41.154 
fused_ordering(453) 00:17:41.154 [fused_ordering(454) through fused_ordering(614), each stamped 00:17:41.154, elided] fused_ordering(615) 00:17:41.414 [fused_ordering(616) through fused_ordering(814), each stamped 00:17:41.414, elided] fused_ordering(815) 00:17:41.414
fused_ordering(816) 00:17:41.414 fused_ordering(817) 00:17:41.414 fused_ordering(818) 00:17:41.414 fused_ordering(819) 00:17:41.414 fused_ordering(820) 00:17:41.982 [2024-12-16 02:39:12.443220] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc80a10 is same with the state(6) to be set 00:17:41.982 fused_ordering(821) 00:17:41.982 [fused_ordering(822) through fused_ordering(870), each stamped 00:17:41.982, elided] fused_ordering(871)
00:17:41.983 [fused_ordering(872) through fused_ordering(991), each stamped 00:17:41.983, elided] fused_ordering(992)
00:17:41.983 [fused_ordering(993) through fused_ordering(1022), each stamped 00:17:41.983, elided] fused_ordering(1023) 00:17:41.983 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:41.983 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:41.983 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:41.983 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:41.983 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:41.983 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:41.983 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:41.983 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:41.983 rmmod nvme_tcp 00:17:41.983
rmmod nvme_fabrics 00:17:41.983 rmmod nvme_keyring 00:17:41.983 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:41.983 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:41.983 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:41.983 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 953678 ']' 00:17:41.983 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 953678 00:17:41.983 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 953678 ']' 00:17:41.983 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 953678 00:17:41.983 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:17:41.983 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:41.983 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 953678 00:17:41.983 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:41.983 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:41.983 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 953678' 00:17:41.983 killing process with pid 953678 00:17:41.983 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 953678 00:17:41.983 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 953678 00:17:42.242 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:42.242 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:42.242 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:42.242 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:42.242 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:17:42.242 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:42.242 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:17:42.242 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:42.242 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:42.242 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.242 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:42.242 02:39:12 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.775 02:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:44.775 00:17:44.775 real 0m10.661s 00:17:44.775 user 0m4.944s 00:17:44.775 sys 0m5.850s 00:17:44.775 02:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:44.775 02:39:14 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:44.775 ************************************ 00:17:44.775 END TEST nvmf_fused_ordering 00:17:44.775 ************************************ 00:17:44.775 02:39:14 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:44.775 02:39:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:44.775 02:39:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:44.775 02:39:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:44.775 ************************************ 00:17:44.775 START TEST nvmf_ns_masking 00:17:44.775 ************************************ 00:17:44.776 02:39:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:44.776 * Looking for test storage... 00:17:44.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:44.776 02:39:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:44.776 02:39:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:17:44.776 02:39:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # 
IFS=.-: 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:44.776 02:39:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:44.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.776 --rc genhtml_branch_coverage=1 00:17:44.776 --rc genhtml_function_coverage=1 00:17:44.776 --rc genhtml_legend=1 00:17:44.776 --rc geninfo_all_blocks=1 00:17:44.776 --rc geninfo_unexecuted_blocks=1 00:17:44.776 00:17:44.776 ' 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:44.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.776 --rc genhtml_branch_coverage=1 00:17:44.776 --rc genhtml_function_coverage=1 00:17:44.776 --rc genhtml_legend=1 00:17:44.776 --rc geninfo_all_blocks=1 00:17:44.776 --rc geninfo_unexecuted_blocks=1 00:17:44.776 00:17:44.776 ' 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:44.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.776 --rc genhtml_branch_coverage=1 00:17:44.776 --rc genhtml_function_coverage=1 00:17:44.776 --rc genhtml_legend=1 00:17:44.776 --rc geninfo_all_blocks=1 00:17:44.776 --rc geninfo_unexecuted_blocks=1 00:17:44.776 00:17:44.776 ' 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # 
LCOV='lcov 00:17:44.776 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:44.776 --rc genhtml_branch_coverage=1 00:17:44.776 --rc genhtml_function_coverage=1 00:17:44.776 --rc genhtml_legend=1 00:17:44.776 --rc geninfo_all_blocks=1 00:17:44.776 --rc geninfo_unexecuted_blocks=1 00:17:44.776 00:17:44.776 ' 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[repeated toolchain bin entries elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[repeated toolchain bin entries elided]:/var/lib/snapd/snap/bin 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[repeated toolchain bin entries elided]:/var/lib/snapd/snap/bin 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[repeated toolchain bin entries elided]:/var/lib/snapd/snap/bin 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:44.776 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- #
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=d67313ef-428c-437d-9eab-9d385a617912 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=ea8b6a14-e97b-4a66-a26a-6a38d3739dd8 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=8ab1e0ab-368f-47fa-8b10-39a5cb7bd1fc 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:44.776 02:39:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:51.344 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:51.344 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:51.344 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:51.344 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:51.344 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:51.344 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:51.344 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:51.344 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:51.344 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:51.344 02:39:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:51.344 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:51.344 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:51.344 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:51.344 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:51.344 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:51.344 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:51.344 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:51.344 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:51.344 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:51.344 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:51.344 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:51.345 02:39:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:51.345 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:51.345 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: 
cvl_0_0' 00:17:51.345 Found net devices under 0000:af:00.0: cvl_0_0 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:51.345 Found net devices under 0000:af:00.1: cvl_0_1 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:51.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:51.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.473 ms 00:17:51.345 00:17:51.345 --- 10.0.0.2 ping statistics --- 00:17:51.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.345 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:17:51.345 02:39:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:51.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:51.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:17:51.345 00:17:51.345 --- 10.0.0.1 ping statistics --- 00:17:51.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.345 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:17:51.345 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.345 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:17:51.345 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:51.345 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.345 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:51.345 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:51.345 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.345 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:51.345 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:51.345 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:51.345 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:51.346 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:51.346 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:51.346 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=957448 00:17:51.346 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 957448 
00:17:51.346 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:51.346 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 957448 ']' 00:17:51.346 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.346 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:51.346 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.346 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:51.346 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:51.346 [2024-12-16 02:39:21.109902] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:17:51.346 [2024-12-16 02:39:21.109955] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.346 [2024-12-16 02:39:21.187726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.346 [2024-12-16 02:39:21.208894] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.346 [2024-12-16 02:39:21.208930] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:51.346 [2024-12-16 02:39:21.208937] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.346 [2024-12-16 02:39:21.208943] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.346 [2024-12-16 02:39:21.208948] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:51.346 [2024-12-16 02:39:21.209413] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:51.346 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:51.346 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:51.346 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:51.346 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:51.346 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:51.346 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.346 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:51.346 [2024-12-16 02:39:21.525330] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.346 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:51.346 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:51.346 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:17:51.346 Malloc1 00:17:51.346 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:51.346 Malloc2 00:17:51.346 02:39:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:51.605 02:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:51.863 02:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:52.122 [2024-12-16 02:39:22.526729] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:52.122 02:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:52.122 02:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8ab1e0ab-368f-47fa-8b10-39a5cb7bd1fc -a 10.0.0.2 -s 4420 -i 4 00:17:52.122 02:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:52.122 02:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:52.122 02:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:52.122 02:39:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:52.122 02:39:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:54.659 02:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:54.659 02:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:54.659 02:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:54.659 02:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:54.659 02:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:54.659 02:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:54.659 02:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:54.659 02:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:54.659 02:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:54.659 02:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:54.659 02:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:54.659 02:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:54.659 02:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:54.659 [ 0]:0x1 00:17:54.659 02:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:54.659 02:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:54.659 
02:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=266901bee4c446a1af935d3304bdcf5f 00:17:54.659 02:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 266901bee4c446a1af935d3304bdcf5f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:54.659 02:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:54.659 02:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:54.659 02:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:54.659 02:39:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:54.659 [ 0]:0x1 00:17:54.659 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:54.659 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:54.659 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=266901bee4c446a1af935d3304bdcf5f 00:17:54.659 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 266901bee4c446a1af935d3304bdcf5f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:54.659 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:54.659 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:54.659 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:54.660 [ 1]:0x2 00:17:54.660 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:17:54.660 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:54.660 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=facd4318693c431db7805a338218a384 00:17:54.660 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ facd4318693c431db7805a338218a384 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:54.660 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:54.660 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:54.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:54.660 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:54.919 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:55.177 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:55.177 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8ab1e0ab-368f-47fa-8b10-39a5cb7bd1fc -a 10.0.0.2 -s 4420 -i 4 00:17:55.177 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:55.177 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:55.177 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:55.177 02:39:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:17:55.177 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:17:55.177 02:39:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:57.711 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:57.711 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:57.711 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:57.711 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:57.711 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:57.711 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:57.711 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:57.711 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:57.711 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:57.711 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:57.711 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:57.711 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:57.711 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:17:57.711 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:57.711 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.711 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:57.711 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.711 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:57.711 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:57.711 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:57.711 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:57.711 02:39:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:57.711 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:57.711 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:57.711 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:57.711 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:57.711 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:57.711 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:57.711 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:17:57.711 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:57.711 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:57.711 [ 0]:0x2 00:17:57.711 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:57.711 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:57.711 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=facd4318693c431db7805a338218a384 00:17:57.711 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ facd4318693c431db7805a338218a384 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:57.711 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:57.711 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:57.711 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:57.711 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:57.711 [ 0]:0x1 00:17:57.711 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:57.711 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:57.711 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=266901bee4c446a1af935d3304bdcf5f 00:17:57.711 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 266901bee4c446a1af935d3304bdcf5f != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:57.711 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:57.711 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:57.711 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:57.711 [ 1]:0x2 00:17:57.711 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:57.711 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:57.970 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=facd4318693c431db7805a338218a384 00:17:57.970 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ facd4318693c431db7805a338218a384 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:57.970 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:57.970 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:57.970 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:57.970 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:57.970 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:57.970 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.970 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:17:57.970 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.970 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:57.970 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:57.970 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:57.970 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:57.970 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:58.229 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:58.229 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:58.229 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:58.229 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:58.229 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:58.229 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:58.229 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:58.229 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:58.229 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:58.229 [ 0]:0x2 00:17:58.229 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:58.229 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:58.229 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=facd4318693c431db7805a338218a384 00:17:58.229 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ facd4318693c431db7805a338218a384 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:58.229 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:58.230 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:58.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:58.230 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:58.488 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:58.488 02:39:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8ab1e0ab-368f-47fa-8b10-39a5cb7bd1fc -a 10.0.0.2 -s 4420 -i 4 00:17:58.488 02:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:58.488 02:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:58.488 02:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:58.488 02:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:58.488 02:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:58.488 02:39:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:01.023 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:01.023 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:01.023 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:01.023 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:01.023 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:01.023 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:01.023 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:01.023 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:01.023 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:01.023 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:01.023 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:01.023 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:01.023 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:01.023 [ 0]:0x1 00:18:01.023 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:01.023 02:39:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:01.023 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=266901bee4c446a1af935d3304bdcf5f 00:18:01.023 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 266901bee4c446a1af935d3304bdcf5f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:01.023 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:01.024 [ 1]:0x2 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=facd4318693c431db7805a338218a384 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ facd4318693c431db7805a338218a384 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:01.024 
02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:01.024 [ 0]:0x2 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=facd4318693c431db7805a338218a384 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ facd4318693c431db7805a338218a384 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:01.024 02:39:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:01.024 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:01.283 [2024-12-16 02:39:31.745014] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:01.283 request: 00:18:01.283 { 00:18:01.283 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:01.283 "nsid": 2, 00:18:01.283 "host": "nqn.2016-06.io.spdk:host1", 00:18:01.283 "method": "nvmf_ns_remove_host", 00:18:01.283 "req_id": 1 00:18:01.283 } 00:18:01.283 Got JSON-RPC error response 00:18:01.283 response: 00:18:01.283 { 00:18:01.283 "code": -32602, 00:18:01.283 "message": "Invalid parameters" 00:18:01.283 } 00:18:01.283 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:01.283 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:01.283 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:01.283 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:01.283 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:01.283 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:01.283 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:01.283 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:01.283 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:01.283 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:01.283 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:01.283 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:01.283 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:01.283 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:01.283 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:01.283 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:01.283 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:01.283 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:01.284 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:01.284 02:39:31 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:01.284 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:01.284 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:01.284 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:01.284 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:01.284 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:01.284 [ 0]:0x2 00:18:01.284 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:01.284 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:01.543 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=facd4318693c431db7805a338218a384 00:18:01.543 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ facd4318693c431db7805a338218a384 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:01.543 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:01.543 02:39:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:01.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:01.543 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=959397 00:18:01.543 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:01.543 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.543 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 959397 /var/tmp/host.sock 00:18:01.543 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 959397 ']' 00:18:01.543 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:01.543 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:01.543 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:01.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:01.543 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:01.543 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:01.543 [2024-12-16 02:39:32.116661] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:18:01.543 [2024-12-16 02:39:32.116708] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid959397 ] 00:18:01.543 [2024-12-16 02:39:32.188866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.801 [2024-12-16 02:39:32.210559] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.801 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:01.801 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:01.801 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:02.060 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:02.318 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid d67313ef-428c-437d-9eab-9d385a617912 00:18:02.318 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:02.318 02:39:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D67313EF428C437D9EAB9D385A617912 -i 00:18:02.576 02:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid ea8b6a14-e97b-4a66-a26a-6a38d3739dd8 00:18:02.576 02:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:02.577 02:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g EA8B6A14E97B4A66A26A6A38D3739DD8 -i 00:18:02.577 02:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:02.835 02:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:03.094 02:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:03.094 02:39:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:03.660 nvme0n1 00:18:03.660 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:03.660 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:03.660 nvme1n2 00:18:03.918 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:03.918 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:03.918 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:03.918 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:03.918 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:03.918 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:03.918 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:03.918 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:03.918 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:04.176 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ d67313ef-428c-437d-9eab-9d385a617912 == \d\6\7\3\1\3\e\f\-\4\2\8\c\-\4\3\7\d\-\9\e\a\b\-\9\d\3\8\5\a\6\1\7\9\1\2 ]] 00:18:04.176 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:04.176 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:04.176 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:04.434 02:39:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ ea8b6a14-e97b-4a66-a26a-6a38d3739dd8 == \e\a\8\b\6\a\1\4\-\e\9\7\b\-\4\a\6\6\-\a\2\6\a\-\6\a\3\8\d\3\7\3\9\d\d\8 ]] 00:18:04.434 02:39:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:04.692 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:04.692 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid d67313ef-428c-437d-9eab-9d385a617912 00:18:04.692 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:04.692 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D67313EF428C437D9EAB9D385A617912 00:18:04.692 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:04.692 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D67313EF428C437D9EAB9D385A617912 00:18:04.692 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:04.692 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:04.692 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:04.692 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:04.692 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:04.692 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:04.692 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:04.693 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:04.693 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D67313EF428C437D9EAB9D385A617912 00:18:04.950 [2024-12-16 02:39:35.503611] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:18:04.950 [2024-12-16 02:39:35.503645] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:18:04.950 [2024-12-16 02:39:35.503653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:04.950 request: 00:18:04.950 { 00:18:04.950 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.950 "namespace": { 00:18:04.951 "bdev_name": "invalid", 00:18:04.951 "nsid": 1, 00:18:04.951 "nguid": "D67313EF428C437D9EAB9D385A617912", 00:18:04.951 "no_auto_visible": false, 00:18:04.951 "hide_metadata": false 00:18:04.951 }, 00:18:04.951 "method": "nvmf_subsystem_add_ns", 00:18:04.951 "req_id": 1 00:18:04.951 } 00:18:04.951 Got JSON-RPC error response 00:18:04.951 response: 00:18:04.951 { 00:18:04.951 "code": -32602, 00:18:04.951 "message": "Invalid parameters" 00:18:04.951 } 00:18:04.951 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:04.951 02:39:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:04.951 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:04.951 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:04.951 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid d67313ef-428c-437d-9eab-9d385a617912 00:18:04.951 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:04.951 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D67313EF428C437D9EAB9D385A617912 -i 00:18:05.209 02:39:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:18:07.113 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:18:07.113 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:18:07.113 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:07.372 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:18:07.372 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 959397 00:18:07.372 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 959397 ']' 00:18:07.372 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 959397 00:18:07.372 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:07.372 02:39:37 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:07.372 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 959397 00:18:07.372 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:07.372 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:07.372 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 959397' 00:18:07.372 killing process with pid 959397 00:18:07.372 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 959397 00:18:07.372 02:39:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 959397 00:18:07.630 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:07.889 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:07.889 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:18:07.889 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:07.889 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:07.889 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:07.889 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:07.889 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:07.889 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:18:07.889 rmmod nvme_tcp 00:18:07.889 rmmod nvme_fabrics 00:18:07.889 rmmod nvme_keyring 00:18:07.889 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:07.889 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:07.889 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:07.889 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 957448 ']' 00:18:07.889 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 957448 00:18:07.889 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 957448 ']' 00:18:07.889 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 957448 00:18:07.889 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:07.889 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:07.889 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 957448 00:18:08.148 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:08.148 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:08.148 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 957448' 00:18:08.148 killing process with pid 957448 00:18:08.148 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 957448 00:18:08.148 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 957448 00:18:08.148 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 
-- # '[' '' == iso ']' 00:18:08.148 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:08.148 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:08.148 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:08.148 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:18:08.148 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:08.148 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:18:08.148 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:08.148 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:08.148 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.148 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:08.148 02:39:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:10.684 02:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:10.684 00:18:10.684 real 0m25.960s 00:18:10.684 user 0m31.096s 00:18:10.684 sys 0m6.995s 00:18:10.684 02:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:10.684 02:39:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:10.684 ************************************ 00:18:10.684 END TEST nvmf_ns_masking 00:18:10.684 ************************************ 00:18:10.684 02:39:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:10.684 
02:39:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:10.684 02:39:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:10.684 02:39:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:10.684 02:39:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:10.684 ************************************ 00:18:10.684 START TEST nvmf_nvme_cli 00:18:10.684 ************************************ 00:18:10.684 02:39:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:10.684 * Looking for test storage... 00:18:10.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:10.684 
02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:10.684 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:10.684 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.684 --rc genhtml_branch_coverage=1 00:18:10.685 --rc genhtml_function_coverage=1 00:18:10.685 --rc genhtml_legend=1 00:18:10.685 --rc geninfo_all_blocks=1 00:18:10.685 --rc geninfo_unexecuted_blocks=1 00:18:10.685 
00:18:10.685 ' 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:10.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.685 --rc genhtml_branch_coverage=1 00:18:10.685 --rc genhtml_function_coverage=1 00:18:10.685 --rc genhtml_legend=1 00:18:10.685 --rc geninfo_all_blocks=1 00:18:10.685 --rc geninfo_unexecuted_blocks=1 00:18:10.685 00:18:10.685 ' 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:10.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.685 --rc genhtml_branch_coverage=1 00:18:10.685 --rc genhtml_function_coverage=1 00:18:10.685 --rc genhtml_legend=1 00:18:10.685 --rc geninfo_all_blocks=1 00:18:10.685 --rc geninfo_unexecuted_blocks=1 00:18:10.685 00:18:10.685 ' 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:10.685 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:10.685 --rc genhtml_branch_coverage=1 00:18:10.685 --rc genhtml_function_coverage=1 00:18:10.685 --rc genhtml_legend=1 00:18:10.685 --rc geninfo_all_blocks=1 00:18:10.685 --rc geninfo_unexecuted_blocks=1 00:18:10.685 00:18:10.685 ' 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:10.685 02:39:41 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:10.685 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:10.685 02:39:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:17.254 02:39:46 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:17.254 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:17.254 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:17.254 02:39:46 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:17.254 Found net devices under 0000:af:00.0: cvl_0_0 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:17.254 Found net devices under 0000:af:00.1: cvl_0_1 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:17.254 02:39:46 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:17.254 02:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:17.254 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:17.254 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:17.254 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:17.254 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:17.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:17.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:18:17.254 00:18:17.254 --- 10.0.0.2 ping statistics --- 00:18:17.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.254 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:18:17.254 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:17.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:17.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:18:17.254 00:18:17.254 --- 10.0.0.1 ping statistics --- 00:18:17.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.254 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:18:17.254 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:17.254 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:18:17.254 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:17.254 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:17.254 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:17.254 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:17.254 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:17.254 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:17.254 02:39:47 
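The nvmf_tcp_init sequence above (common.sh@250-291) builds the test topology: the target NIC moves into a network namespace, each side gets a /24 address, a tagged iptables rule opens port 4420, and a ping in each direction verifies connectivity. Since the real commands need root and the physical e810 NICs, this sketch only assembles and prints the command list (names taken from the trace):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmf TCP topology setup seen in the trace.
# Executing these for real requires root and the actual interfaces.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0 INI_IF=cvl_0_1
cmds=(
  "ip netns add $NS"
  "ip link set $TGT_IF netns $NS"                       # target NIC into namespace
  "ip addr add 10.0.0.1/24 dev $INI_IF"                 # initiator side
  "ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF"
  "ip link set $INI_IF up"
  "ip netns exec $NS ip link set $TGT_IF up"
  "iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF"
)
printf '%s\n' "${cmds[@]}"
```

The `SPDK_NVMF` comment on the iptables rule is what lets the teardown phase later strip exactly these rules out of `iptables-save` output.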
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:17.254 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:17.254 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:17.254 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:17.254 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:17.254 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=964016 00:18:17.254 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 964016 00:18:17.254 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:17.254 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 964016 ']' 00:18:17.254 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.254 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:17.254 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:17.254 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:17.254 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:17.254 [2024-12-16 02:39:47.234503] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:18:17.254 [2024-12-16 02:39:47.234553] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.254 [2024-12-16 02:39:47.314544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:17.254 [2024-12-16 02:39:47.338700] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:17.254 [2024-12-16 02:39:47.338739] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.254 [2024-12-16 02:39:47.338746] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.254 [2024-12-16 02:39:47.338752] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.254 [2024-12-16 02:39:47.338757] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:17.255 [2024-12-16 02:39:47.343866] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.255 [2024-12-16 02:39:47.343893] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.255 [2024-12-16 02:39:47.343998] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.255 [2024-12-16 02:39:47.343999] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:17.255 [2024-12-16 02:39:47.483907] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:17.255 Malloc0 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:17.255 Malloc1 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:17.255 [2024-12-16 02:39:47.578580] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:18:17.255 00:18:17.255 Discovery Log Number of Records 2, Generation counter 2 00:18:17.255 =====Discovery Log Entry 0====== 00:18:17.255 trtype: tcp 00:18:17.255 adrfam: ipv4 00:18:17.255 subtype: current discovery subsystem 00:18:17.255 treq: not required 00:18:17.255 portid: 0 00:18:17.255 trsvcid: 4420 
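The rpc_cmd calls above configure the target: one TCP transport, two malloc bdevs, a subsystem with both namespaces, and data plus discovery listeners. Condensed into a dry-run list (the `scripts/rpc.py` path is an assumption; the harness invokes these via `rpc_cmd` inside the target namespace):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the RPC configuration sequence from the trace.
RPC="scripts/rpc.py"                      # assumed path to the SPDK RPC client
NQN=nqn.2016-06.io.spdk:cnode1
rpc_cmds=(
  "$RPC nvmf_create_transport -t tcp -o -u 8192"
  "$RPC bdev_malloc_create 64 512 -b Malloc0"
  "$RPC bdev_malloc_create 64 512 -b Malloc1"
  "$RPC nvmf_create_subsystem $NQN -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291"
  "$RPC nvmf_subsystem_add_ns $NQN Malloc0"
  "$RPC nvmf_subsystem_add_ns $NQN Malloc1"
  "$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420"
  "$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420"
)
printf '%s\n' "${rpc_cmds[@]}"
```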
00:18:17.255 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:17.255 traddr: 10.0.0.2 00:18:17.255 eflags: explicit discovery connections, duplicate discovery information 00:18:17.255 sectype: none 00:18:17.255 =====Discovery Log Entry 1====== 00:18:17.255 trtype: tcp 00:18:17.255 adrfam: ipv4 00:18:17.255 subtype: nvme subsystem 00:18:17.255 treq: not required 00:18:17.255 portid: 0 00:18:17.255 trsvcid: 4420 00:18:17.255 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:17.255 traddr: 10.0.0.2 00:18:17.255 eflags: none 00:18:17.255 sectype: none 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:17.255 02:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:18.636 02:39:48 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:18.636 02:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:18:18.636 02:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:18.636 02:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:18.636 02:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:18.636 02:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:18:20.540 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:20.540 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:20.540 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:20.540 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:20.540 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:20.540 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:18:20.540 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:20.540 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:20.540 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:20.540 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:20.540 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:20.540 
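The waitforserial helper above polls `lsblk -l -o NAME,SERIAL` until the expected number of devices with the subsystem serial appears (up to 16 tries, sleeping between them). A runnable sketch of that pattern, with `lsblk` stubbed so it works without real NVMe devices:

```shell
#!/usr/bin/env bash
# Sketch of the waitforserial polling loop; list_blk is a stub standing in
# for `lsblk -l -o NAME,SERIAL` so the loop can run anywhere.
list_blk() {
  printf 'nvme0n1 SPDKISFASTANDAWESOME\nnvme0n2 SPDKISFASTANDAWESOME\n'
}
serial=SPDKISFASTANDAWESOME
expected=2
n=0
for (( i = 0; i <= 15; i++ )); do
  n=$(list_blk | grep -c "$serial")     # count block devices carrying the serial
  (( n == expected )) && break
  sleep 2                               # the real harness also waits 2s per retry
done
echo "$n devices with serial $serial"
```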
02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:20.540 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:20.540 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:20.540 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:20.540 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:20.540 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:20.540 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:20.540 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:20.540 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:20.540 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:20.540 /dev/nvme0n2 ]] 00:18:20.540 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:20.540 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:20.540 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:20.540 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:20.540 02:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:20.540 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:20.540 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:20.540 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:18:20.540 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:20.540 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:20.540 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:20.540 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:20.540 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:20.540 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:20.540 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:20.540 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:20.540 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:20.540 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:20.540 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:20.540 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:18:20.540 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:20.540 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:20.540 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:20.540 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:20.540 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:18:20.541 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:20.541 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:20.541 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.541 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:20.541 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.541 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:20.541 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:20.541 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:20.541 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:20.541 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:20.541 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:20.541 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:20.541 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:20.541 rmmod nvme_tcp 00:18:20.541 rmmod nvme_fabrics 00:18:20.541 rmmod nvme_keyring 00:18:20.541 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:20.541 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:20.541 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:20.541 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 964016 ']' 
00:18:20.800 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 964016 00:18:20.800 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 964016 ']' 00:18:20.800 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 964016 00:18:20.800 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:18:20.800 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:20.800 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 964016 00:18:20.800 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:20.800 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:20.800 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 964016' 00:18:20.800 killing process with pid 964016 00:18:20.800 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 964016 00:18:20.800 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 964016 00:18:20.800 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:20.800 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:20.800 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:20.800 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:20.800 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:18:20.800 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
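The `iptr` cleanup above relies on the comment tag added earlier: it pipes `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`, so only the harness's own rules disappear. The filtering step, demonstrated on a sample ruleset (no root or live firewall needed):

```shell
#!/usr/bin/env bash
# Sketch of the SPDK_NVMF cleanup filter: rules tagged with the SPDK_NVMF
# comment are dropped before the table is restored. A sample ruleset
# stands in for real iptables-save output.
rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF'
kept=$(printf '%s\n' "$rules" | grep -v SPDK_NVMF)
echo "$kept"   # in the harness: iptables-save | grep -v SPDK_NVMF | iptables-restore
```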
00:18:20.800 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:18:20.800 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:20.800 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:20.800 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.800 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:20.800 02:39:51 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:23.337 00:18:23.337 real 0m12.605s 00:18:23.337 user 0m18.087s 00:18:23.337 sys 0m5.061s 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:23.337 ************************************ 00:18:23.337 END TEST nvmf_nvme_cli 00:18:23.337 ************************************ 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:23.337 ************************************ 00:18:23.337 START TEST 
nvmf_vfio_user 00:18:23.337 ************************************ 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:23.337 * Looking for test storage... 00:18:23.337 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:18:23.337 02:39:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:18:23.337 02:39:53 
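The cmp_versions trace above (scripts/common.sh@333-368, checking `lt 1.15 2` for the lcov version) splits each version string on `.-:` and compares component by component, padding the shorter one with zeros. A compact sketch of that comparison, written as an assumed standalone function rather than the exact scripts/common.sh implementation:

```shell
#!/usr/bin/env bash
# Sketch of the component-wise version compare used by cmp_versions:
# returns 0 (true) when $1 < $2. Splits on '.', '-' and ':' like the trace.
ver_lt() {
  local IFS='.-:'
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  local i x y
  for (( i = 0; i < n; i++ )); do
    x=${a[i]:-0}; y=${b[i]:-0}     # missing components compare as 0
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1                          # equal versions: not less-than
}
ver_lt 1.15 2 && echo "1.15 < 2"
```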
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:23.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.337 --rc genhtml_branch_coverage=1 00:18:23.337 --rc genhtml_function_coverage=1 00:18:23.337 --rc genhtml_legend=1 00:18:23.337 --rc geninfo_all_blocks=1 00:18:23.337 --rc geninfo_unexecuted_blocks=1 00:18:23.337 00:18:23.337 ' 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:23.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.337 --rc genhtml_branch_coverage=1 00:18:23.337 --rc genhtml_function_coverage=1 00:18:23.337 --rc genhtml_legend=1 00:18:23.337 --rc geninfo_all_blocks=1 00:18:23.337 --rc geninfo_unexecuted_blocks=1 00:18:23.337 00:18:23.337 ' 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:23.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.337 --rc genhtml_branch_coverage=1 00:18:23.337 --rc genhtml_function_coverage=1 00:18:23.337 --rc genhtml_legend=1 00:18:23.337 --rc geninfo_all_blocks=1 00:18:23.337 --rc geninfo_unexecuted_blocks=1 00:18:23.337 00:18:23.337 ' 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:23.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:23.337 --rc genhtml_branch_coverage=1 00:18:23.337 --rc genhtml_function_coverage=1 00:18:23.337 --rc genhtml_legend=1 00:18:23.337 --rc geninfo_all_blocks=1 00:18:23.337 --rc geninfo_unexecuted_blocks=1 00:18:23.337 00:18:23.337 ' 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:23.337 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:23.338 
02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:23.338 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:23.338 02:39:53 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=965269 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 965269' 00:18:23.338 Process pid: 965269 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 965269 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 
965269 ']' 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:23.338 02:39:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:23.338 [2024-12-16 02:39:53.875372] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:18:23.338 [2024-12-16 02:39:53.875421] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.338 [2024-12-16 02:39:53.949339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:23.338 [2024-12-16 02:39:53.971971] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:23.338 [2024-12-16 02:39:53.972007] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:23.338 [2024-12-16 02:39:53.972014] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:23.338 [2024-12-16 02:39:53.972021] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:23.338 [2024-12-16 02:39:53.972026] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:23.338 [2024-12-16 02:39:53.973363] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:23.338 [2024-12-16 02:39:53.973398] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:23.338 [2024-12-16 02:39:53.973432] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.338 [2024-12-16 02:39:53.973433] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:23.597 02:39:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:23.597 02:39:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:23.597 02:39:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:24.533 02:39:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:24.791 02:39:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:24.791 02:39:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:24.791 02:39:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:24.791 02:39:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:24.791 02:39:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:25.049 Malloc1 00:18:25.049 02:39:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:25.049 02:39:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:25.308 02:39:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:25.566 02:39:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:25.566 02:39:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:25.566 02:39:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:25.825 Malloc2 00:18:25.825 02:39:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:26.084 02:39:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:26.084 02:39:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:26.343 02:39:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:26.343 02:39:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:26.343 02:39:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:18:26.343 02:39:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:26.343 02:39:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:26.343 02:39:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:26.343 [2024-12-16 02:39:56.951259] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:18:26.343 [2024-12-16 02:39:56.951293] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid965742 ] 00:18:26.343 [2024-12-16 02:39:56.993315] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:26.343 [2024-12-16 02:39:56.995732] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:26.343 [2024-12-16 02:39:56.995753] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f6f34a95000 00:18:26.343 [2024-12-16 02:39:56.996728] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:26.343 [2024-12-16 02:39:56.997729] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:26.343 [2024-12-16 02:39:56.998740] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:26.343 [2024-12-16 02:39:56.999747] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:26.343 [2024-12-16 02:39:57.000753] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:26.343 [2024-12-16 02:39:57.001755] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:26.604 [2024-12-16 02:39:57.002763] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:26.604 [2024-12-16 02:39:57.003768] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:26.604 [2024-12-16 02:39:57.004772] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:26.604 [2024-12-16 02:39:57.004781] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6f3379f000 00:18:26.604 [2024-12-16 02:39:57.005699] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:26.604 [2024-12-16 02:39:57.018108] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:26.604 [2024-12-16 02:39:57.018131] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:18:26.604 [2024-12-16 02:39:57.023884] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:26.604 [2024-12-16 02:39:57.023920] 
nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:26.604 [2024-12-16 02:39:57.023995] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:18:26.604 [2024-12-16 02:39:57.024011] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:18:26.604 [2024-12-16 02:39:57.024016] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:18:26.604 [2024-12-16 02:39:57.024879] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:26.604 [2024-12-16 02:39:57.024887] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:18:26.604 [2024-12-16 02:39:57.024893] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:18:26.604 [2024-12-16 02:39:57.025883] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:26.604 [2024-12-16 02:39:57.025890] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:18:26.604 [2024-12-16 02:39:57.025897] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:26.604 [2024-12-16 02:39:57.026891] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:26.604 [2024-12-16 02:39:57.026899] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:26.604 [2024-12-16 02:39:57.027896] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:18:26.604 [2024-12-16 02:39:57.027903] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:26.604 [2024-12-16 02:39:57.027908] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:26.604 [2024-12-16 02:39:57.027914] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:26.604 [2024-12-16 02:39:57.028021] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:18:26.604 [2024-12-16 02:39:57.028025] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:26.604 [2024-12-16 02:39:57.028030] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:26.604 [2024-12-16 02:39:57.028904] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:26.604 [2024-12-16 02:39:57.029912] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:26.604 [2024-12-16 02:39:57.030915] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:26.604 [2024-12-16 02:39:57.031917] 
vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:26.604 [2024-12-16 02:39:57.032003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:26.604 [2024-12-16 02:39:57.032933] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:26.604 [2024-12-16 02:39:57.032940] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:26.604 [2024-12-16 02:39:57.032944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:26.604 [2024-12-16 02:39:57.032960] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:18:26.604 [2024-12-16 02:39:57.032967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:26.604 [2024-12-16 02:39:57.032981] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:26.604 [2024-12-16 02:39:57.032986] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:26.604 [2024-12-16 02:39:57.032989] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:26.604 [2024-12-16 02:39:57.033001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:26.604 [2024-12-16 02:39:57.033055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0001 p:1 m:0 dnr:0 00:18:26.604 [2024-12-16 02:39:57.033063] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:18:26.604 [2024-12-16 02:39:57.033067] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:18:26.604 [2024-12-16 02:39:57.033071] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:18:26.604 [2024-12-16 02:39:57.033075] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:26.604 [2024-12-16 02:39:57.033080] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:18:26.604 [2024-12-16 02:39:57.033084] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:18:26.604 [2024-12-16 02:39:57.033088] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:18:26.604 [2024-12-16 02:39:57.033096] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:26.604 [2024-12-16 02:39:57.033106] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:26.604 [2024-12-16 02:39:57.033122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:26.604 [2024-12-16 02:39:57.033131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:26.605 [2024-12-16 02:39:57.033139] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:26.605 [2024-12-16 02:39:57.033146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:26.605 [2024-12-16 02:39:57.033155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:26.605 [2024-12-16 02:39:57.033159] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:26.605 [2024-12-16 02:39:57.033168] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:26.605 [2024-12-16 02:39:57.033176] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:26.605 [2024-12-16 02:39:57.033184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:26.605 [2024-12-16 02:39:57.033189] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:18:26.605 [2024-12-16 02:39:57.033193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:26.605 [2024-12-16 02:39:57.033199] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:18:26.605 [2024-12-16 02:39:57.033204] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 
30000 ms) 00:18:26.605 [2024-12-16 02:39:57.033211] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:26.605 [2024-12-16 02:39:57.033225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:26.605 [2024-12-16 02:39:57.033272] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:18:26.605 [2024-12-16 02:39:57.033281] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:26.605 [2024-12-16 02:39:57.033287] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:26.605 [2024-12-16 02:39:57.033291] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:26.605 [2024-12-16 02:39:57.033294] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:26.605 [2024-12-16 02:39:57.033300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:26.605 [2024-12-16 02:39:57.033311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:26.605 [2024-12-16 02:39:57.033319] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:18:26.605 [2024-12-16 02:39:57.033329] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:18:26.605 [2024-12-16 02:39:57.033336] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:18:26.605 [2024-12-16 02:39:57.033341] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:26.605 [2024-12-16 02:39:57.033345] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:26.605 [2024-12-16 02:39:57.033348] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:26.605 [2024-12-16 02:39:57.033353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:26.605 [2024-12-16 02:39:57.033375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:26.605 [2024-12-16 02:39:57.033385] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:26.605 [2024-12-16 02:39:57.033392] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:26.605 [2024-12-16 02:39:57.033397] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:26.605 [2024-12-16 02:39:57.033401] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:26.605 [2024-12-16 02:39:57.033404] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:26.605 [2024-12-16 02:39:57.033410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:26.605 [2024-12-16 02:39:57.033421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:18:26.605 [2024-12-16 02:39:57.033428] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:26.605 [2024-12-16 02:39:57.033433] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:26.605 [2024-12-16 02:39:57.033440] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:18:26.605 [2024-12-16 02:39:57.033445] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:26.605 [2024-12-16 02:39:57.033450] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:26.605 [2024-12-16 02:39:57.033454] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:18:26.605 [2024-12-16 02:39:57.033458] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:26.605 [2024-12-16 02:39:57.033462] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:18:26.605 [2024-12-16 02:39:57.033467] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:18:26.605 [2024-12-16 02:39:57.033483] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:26.605 [2024-12-16 02:39:57.033494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:26.605 [2024-12-16 02:39:57.033504] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:26.605 [2024-12-16 02:39:57.033512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:26.605 [2024-12-16 02:39:57.033521] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:26.605 [2024-12-16 02:39:57.033531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:26.605 [2024-12-16 02:39:57.033541] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:26.605 [2024-12-16 02:39:57.033547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:26.605 [2024-12-16 02:39:57.033560] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:26.605 [2024-12-16 02:39:57.033565] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:26.605 [2024-12-16 02:39:57.033568] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:26.605 [2024-12-16 02:39:57.033571] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:26.605 [2024-12-16 02:39:57.033574] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:26.605 [2024-12-16 02:39:57.033579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:26.605 [2024-12-16 02:39:57.033585] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:26.605 [2024-12-16 02:39:57.033589] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:26.605 [2024-12-16 02:39:57.033592] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:26.605 [2024-12-16 02:39:57.033597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:26.605 [2024-12-16 02:39:57.033603] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:26.605 [2024-12-16 02:39:57.033607] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:26.605 [2024-12-16 02:39:57.033610] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:26.605 [2024-12-16 02:39:57.033615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:26.605 [2024-12-16 02:39:57.033622] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:26.605 [2024-12-16 02:39:57.033625] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:26.605 [2024-12-16 02:39:57.033628] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:26.605 [2024-12-16 02:39:57.033633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:26.605 [2024-12-16 02:39:57.033639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:26.605 [2024-12-16 
02:39:57.033650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:26.605 [2024-12-16 02:39:57.033661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:26.605 [2024-12-16 02:39:57.033667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:26.605 ===================================================== 00:18:26.605 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:26.605 ===================================================== 00:18:26.605 Controller Capabilities/Features 00:18:26.605 ================================ 00:18:26.605 Vendor ID: 4e58 00:18:26.605 Subsystem Vendor ID: 4e58 00:18:26.605 Serial Number: SPDK1 00:18:26.605 Model Number: SPDK bdev Controller 00:18:26.605 Firmware Version: 25.01 00:18:26.605 Recommended Arb Burst: 6 00:18:26.605 IEEE OUI Identifier: 8d 6b 50 00:18:26.605 Multi-path I/O 00:18:26.606 May have multiple subsystem ports: Yes 00:18:26.606 May have multiple controllers: Yes 00:18:26.606 Associated with SR-IOV VF: No 00:18:26.606 Max Data Transfer Size: 131072 00:18:26.606 Max Number of Namespaces: 32 00:18:26.606 Max Number of I/O Queues: 127 00:18:26.606 NVMe Specification Version (VS): 1.3 00:18:26.606 NVMe Specification Version (Identify): 1.3 00:18:26.606 Maximum Queue Entries: 256 00:18:26.606 Contiguous Queues Required: Yes 00:18:26.606 Arbitration Mechanisms Supported 00:18:26.606 Weighted Round Robin: Not Supported 00:18:26.606 Vendor Specific: Not Supported 00:18:26.606 Reset Timeout: 15000 ms 00:18:26.606 Doorbell Stride: 4 bytes 00:18:26.606 NVM Subsystem Reset: Not Supported 00:18:26.606 Command Sets Supported 00:18:26.606 NVM Command Set: Supported 00:18:26.606 Boot Partition: Not Supported 00:18:26.606 Memory Page Size Minimum: 4096 bytes 00:18:26.606 
Memory Page Size Maximum: 4096 bytes 00:18:26.606 Persistent Memory Region: Not Supported 00:18:26.606 Optional Asynchronous Events Supported 00:18:26.606 Namespace Attribute Notices: Supported 00:18:26.606 Firmware Activation Notices: Not Supported 00:18:26.606 ANA Change Notices: Not Supported 00:18:26.606 PLE Aggregate Log Change Notices: Not Supported 00:18:26.606 LBA Status Info Alert Notices: Not Supported 00:18:26.606 EGE Aggregate Log Change Notices: Not Supported 00:18:26.606 Normal NVM Subsystem Shutdown event: Not Supported 00:18:26.606 Zone Descriptor Change Notices: Not Supported 00:18:26.606 Discovery Log Change Notices: Not Supported 00:18:26.606 Controller Attributes 00:18:26.606 128-bit Host Identifier: Supported 00:18:26.606 Non-Operational Permissive Mode: Not Supported 00:18:26.606 NVM Sets: Not Supported 00:18:26.606 Read Recovery Levels: Not Supported 00:18:26.606 Endurance Groups: Not Supported 00:18:26.606 Predictable Latency Mode: Not Supported 00:18:26.606 Traffic Based Keep ALive: Not Supported 00:18:26.606 Namespace Granularity: Not Supported 00:18:26.606 SQ Associations: Not Supported 00:18:26.606 UUID List: Not Supported 00:18:26.606 Multi-Domain Subsystem: Not Supported 00:18:26.606 Fixed Capacity Management: Not Supported 00:18:26.606 Variable Capacity Management: Not Supported 00:18:26.606 Delete Endurance Group: Not Supported 00:18:26.606 Delete NVM Set: Not Supported 00:18:26.606 Extended LBA Formats Supported: Not Supported 00:18:26.606 Flexible Data Placement Supported: Not Supported 00:18:26.606 00:18:26.606 Controller Memory Buffer Support 00:18:26.606 ================================ 00:18:26.606 Supported: No 00:18:26.606 00:18:26.606 Persistent Memory Region Support 00:18:26.606 ================================ 00:18:26.606 Supported: No 00:18:26.606 00:18:26.606 Admin Command Set Attributes 00:18:26.606 ============================ 00:18:26.606 Security Send/Receive: Not Supported 00:18:26.606 Format NVM: Not Supported 
00:18:26.606 Firmware Activate/Download: Not Supported 00:18:26.606 Namespace Management: Not Supported 00:18:26.606 Device Self-Test: Not Supported 00:18:26.606 Directives: Not Supported 00:18:26.606 NVMe-MI: Not Supported 00:18:26.606 Virtualization Management: Not Supported 00:18:26.606 Doorbell Buffer Config: Not Supported 00:18:26.606 Get LBA Status Capability: Not Supported 00:18:26.606 Command & Feature Lockdown Capability: Not Supported 00:18:26.606 Abort Command Limit: 4 00:18:26.606 Async Event Request Limit: 4 00:18:26.606 Number of Firmware Slots: N/A 00:18:26.606 Firmware Slot 1 Read-Only: N/A 00:18:26.606 Firmware Activation Without Reset: N/A 00:18:26.606 Multiple Update Detection Support: N/A 00:18:26.606 Firmware Update Granularity: No Information Provided 00:18:26.606 Per-Namespace SMART Log: No 00:18:26.606 Asymmetric Namespace Access Log Page: Not Supported 00:18:26.606 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:26.606 Command Effects Log Page: Supported 00:18:26.606 Get Log Page Extended Data: Supported 00:18:26.606 Telemetry Log Pages: Not Supported 00:18:26.606 Persistent Event Log Pages: Not Supported 00:18:26.606 Supported Log Pages Log Page: May Support 00:18:26.606 Commands Supported & Effects Log Page: Not Supported 00:18:26.606 Feature Identifiers & Effects Log Page:May Support 00:18:26.606 NVMe-MI Commands & Effects Log Page: May Support 00:18:26.606 Data Area 4 for Telemetry Log: Not Supported 00:18:26.606 Error Log Page Entries Supported: 128 00:18:26.606 Keep Alive: Supported 00:18:26.606 Keep Alive Granularity: 10000 ms 00:18:26.606 00:18:26.606 NVM Command Set Attributes 00:18:26.606 ========================== 00:18:26.606 Submission Queue Entry Size 00:18:26.606 Max: 64 00:18:26.606 Min: 64 00:18:26.606 Completion Queue Entry Size 00:18:26.606 Max: 16 00:18:26.606 Min: 16 00:18:26.606 Number of Namespaces: 32 00:18:26.606 Compare Command: Supported 00:18:26.606 Write Uncorrectable Command: Not Supported 00:18:26.606 Dataset 
Management Command: Supported 00:18:26.606 Write Zeroes Command: Supported 00:18:26.606 Set Features Save Field: Not Supported 00:18:26.606 Reservations: Not Supported 00:18:26.606 Timestamp: Not Supported 00:18:26.606 Copy: Supported 00:18:26.606 Volatile Write Cache: Present 00:18:26.606 Atomic Write Unit (Normal): 1 00:18:26.606 Atomic Write Unit (PFail): 1 00:18:26.606 Atomic Compare & Write Unit: 1 00:18:26.606 Fused Compare & Write: Supported 00:18:26.606 Scatter-Gather List 00:18:26.606 SGL Command Set: Supported (Dword aligned) 00:18:26.606 SGL Keyed: Not Supported 00:18:26.606 SGL Bit Bucket Descriptor: Not Supported 00:18:26.606 SGL Metadata Pointer: Not Supported 00:18:26.606 Oversized SGL: Not Supported 00:18:26.606 SGL Metadata Address: Not Supported 00:18:26.606 SGL Offset: Not Supported 00:18:26.606 Transport SGL Data Block: Not Supported 00:18:26.606 Replay Protected Memory Block: Not Supported 00:18:26.606 00:18:26.606 Firmware Slot Information 00:18:26.606 ========================= 00:18:26.606 Active slot: 1 00:18:26.606 Slot 1 Firmware Revision: 25.01 00:18:26.606 00:18:26.606 00:18:26.606 Commands Supported and Effects 00:18:26.606 ============================== 00:18:26.606 Admin Commands 00:18:26.606 -------------- 00:18:26.606 Get Log Page (02h): Supported 00:18:26.606 Identify (06h): Supported 00:18:26.606 Abort (08h): Supported 00:18:26.606 Set Features (09h): Supported 00:18:26.606 Get Features (0Ah): Supported 00:18:26.606 Asynchronous Event Request (0Ch): Supported 00:18:26.606 Keep Alive (18h): Supported 00:18:26.606 I/O Commands 00:18:26.606 ------------ 00:18:26.606 Flush (00h): Supported LBA-Change 00:18:26.606 Write (01h): Supported LBA-Change 00:18:26.606 Read (02h): Supported 00:18:26.606 Compare (05h): Supported 00:18:26.606 Write Zeroes (08h): Supported LBA-Change 00:18:26.606 Dataset Management (09h): Supported LBA-Change 00:18:26.606 Copy (19h): Supported LBA-Change 00:18:26.606 00:18:26.606 Error Log 00:18:26.606 ========= 
00:18:26.606 00:18:26.606 Arbitration 00:18:26.606 =========== 00:18:26.606 Arbitration Burst: 1 00:18:26.606 00:18:26.606 Power Management 00:18:26.606 ================ 00:18:26.606 Number of Power States: 1 00:18:26.606 Current Power State: Power State #0 00:18:26.606 Power State #0: 00:18:26.606 Max Power: 0.00 W 00:18:26.606 Non-Operational State: Operational 00:18:26.606 Entry Latency: Not Reported 00:18:26.606 Exit Latency: Not Reported 00:18:26.606 Relative Read Throughput: 0 00:18:26.606 Relative Read Latency: 0 00:18:26.606 Relative Write Throughput: 0 00:18:26.606 Relative Write Latency: 0 00:18:26.606 Idle Power: Not Reported 00:18:26.606 Active Power: Not Reported 00:18:26.606 Non-Operational Permissive Mode: Not Supported 00:18:26.606 00:18:26.606 Health Information 00:18:26.606 ================== 00:18:26.606 Critical Warnings: 00:18:26.606 Available Spare Space: OK 00:18:26.606 Temperature: OK 00:18:26.606 Device Reliability: OK 00:18:26.606 Read Only: No 00:18:26.606 Volatile Memory Backup: OK 00:18:26.606 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:26.606 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:26.606 Available Spare: 0% 00:18:26.606 Available Sp[2024-12-16 02:39:57.033764] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:26.606 [2024-12-16 02:39:57.033773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:26.606 [2024-12-16 02:39:57.033797] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:18:26.606 [2024-12-16 02:39:57.033805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:26.607 [2024-12-16 02:39:57.033811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:26.607 [2024-12-16 02:39:57.033816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:26.607 [2024-12-16 02:39:57.033821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:26.607 [2024-12-16 02:39:57.033940] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:26.607 [2024-12-16 02:39:57.033951] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:26.607 [2024-12-16 02:39:57.034947] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:26.607 [2024-12-16 02:39:57.034994] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:18:26.607 [2024-12-16 02:39:57.035000] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:18:26.607 [2024-12-16 02:39:57.035947] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:26.607 [2024-12-16 02:39:57.035956] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:18:26.607 [2024-12-16 02:39:57.036006] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:26.607 [2024-12-16 02:39:57.038853] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:26.607 are Threshold: 0% 00:18:26.607 Life Percentage Used: 0% 00:18:26.607 Data Units Read: 0 00:18:26.607 Data 
Units Written: 0 00:18:26.607 Host Read Commands: 0 00:18:26.607 Host Write Commands: 0 00:18:26.607 Controller Busy Time: 0 minutes 00:18:26.607 Power Cycles: 0 00:18:26.607 Power On Hours: 0 hours 00:18:26.607 Unsafe Shutdowns: 0 00:18:26.607 Unrecoverable Media Errors: 0 00:18:26.607 Lifetime Error Log Entries: 0 00:18:26.607 Warning Temperature Time: 0 minutes 00:18:26.607 Critical Temperature Time: 0 minutes 00:18:26.607 00:18:26.607 Number of Queues 00:18:26.607 ================ 00:18:26.607 Number of I/O Submission Queues: 127 00:18:26.607 Number of I/O Completion Queues: 127 00:18:26.607 00:18:26.607 Active Namespaces 00:18:26.607 ================= 00:18:26.607 Namespace ID:1 00:18:26.607 Error Recovery Timeout: Unlimited 00:18:26.607 Command Set Identifier: NVM (00h) 00:18:26.607 Deallocate: Supported 00:18:26.607 Deallocated/Unwritten Error: Not Supported 00:18:26.607 Deallocated Read Value: Unknown 00:18:26.607 Deallocate in Write Zeroes: Not Supported 00:18:26.607 Deallocated Guard Field: 0xFFFF 00:18:26.607 Flush: Supported 00:18:26.607 Reservation: Supported 00:18:26.607 Namespace Sharing Capabilities: Multiple Controllers 00:18:26.607 Size (in LBAs): 131072 (0GiB) 00:18:26.607 Capacity (in LBAs): 131072 (0GiB) 00:18:26.607 Utilization (in LBAs): 131072 (0GiB) 00:18:26.607 NGUID: 1D6E60B5D3A44BE4B15AC6094E0A88BB 00:18:26.607 UUID: 1d6e60b5-d3a4-4be4-b15a-c6094e0a88bb 00:18:26.607 Thin Provisioning: Not Supported 00:18:26.607 Per-NS Atomic Units: Yes 00:18:26.607 Atomic Boundary Size (Normal): 0 00:18:26.607 Atomic Boundary Size (PFail): 0 00:18:26.607 Atomic Boundary Offset: 0 00:18:26.607 Maximum Single Source Range Length: 65535 00:18:26.607 Maximum Copy Length: 65535 00:18:26.607 Maximum Source Range Count: 1 00:18:26.607 NGUID/EUI64 Never Reused: No 00:18:26.607 Namespace Write Protected: No 00:18:26.607 Number of LBA Formats: 1 00:18:26.607 Current LBA Format: LBA Format #00 00:18:26.607 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:18:26.607 00:18:26.607 02:39:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:26.866 [2024-12-16 02:39:57.267973] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:32.137 Initializing NVMe Controllers 00:18:32.137 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:32.137 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:32.137 Initialization complete. Launching workers. 00:18:32.137 ======================================================== 00:18:32.137 Latency(us) 00:18:32.137 Device Information : IOPS MiB/s Average min max 00:18:32.137 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39878.76 155.78 3209.57 974.93 9109.98 00:18:32.137 ======================================================== 00:18:32.137 Total : 39878.76 155.78 3209.57 974.93 9109.98 00:18:32.137 00:18:32.137 [2024-12-16 02:40:02.289126] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:32.137 02:40:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:32.137 [2024-12-16 02:40:02.524236] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:37.624 Initializing NVMe Controllers 00:18:37.624 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:18:37.624 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:37.624 Initialization complete. Launching workers. 00:18:37.624 ======================================================== 00:18:37.624 Latency(us) 00:18:37.624 Device Information : IOPS MiB/s Average min max 00:18:37.624 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16057.30 62.72 7976.85 5983.23 9977.56 00:18:37.624 ======================================================== 00:18:37.624 Total : 16057.30 62.72 7976.85 5983.23 9977.56 00:18:37.624 00:18:37.624 [2024-12-16 02:40:07.568583] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:37.624 02:40:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:37.624 [2024-12-16 02:40:07.771544] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:42.894 [2024-12-16 02:40:12.877345] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:42.894 Initializing NVMe Controllers 00:18:42.894 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:42.894 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:42.894 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:42.894 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:42.894 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:42.894 Initialization complete. Launching workers. 
00:18:42.894 Starting thread on core 2 00:18:42.894 Starting thread on core 3 00:18:42.894 Starting thread on core 1 00:18:42.894 02:40:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:42.894 [2024-12-16 02:40:13.177257] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:46.182 [2024-12-16 02:40:16.240060] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:46.182 Initializing NVMe Controllers 00:18:46.182 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:46.182 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:46.182 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:46.182 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:46.182 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:46.182 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:46.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:46.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:46.182 Initialization complete. Launching workers. 
00:18:46.182 Starting thread on core 1 with urgent priority queue 00:18:46.182 Starting thread on core 2 with urgent priority queue 00:18:46.182 Starting thread on core 3 with urgent priority queue 00:18:46.182 Starting thread on core 0 with urgent priority queue 00:18:46.182 SPDK bdev Controller (SPDK1 ) core 0: 8210.67 IO/s 12.18 secs/100000 ios 00:18:46.182 SPDK bdev Controller (SPDK1 ) core 1: 9111.33 IO/s 10.98 secs/100000 ios 00:18:46.182 SPDK bdev Controller (SPDK1 ) core 2: 9458.33 IO/s 10.57 secs/100000 ios 00:18:46.182 SPDK bdev Controller (SPDK1 ) core 3: 8640.00 IO/s 11.57 secs/100000 ios 00:18:46.182 ======================================================== 00:18:46.182 00:18:46.182 02:40:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:46.182 [2024-12-16 02:40:16.528296] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:46.182 Initializing NVMe Controllers 00:18:46.182 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:46.182 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:46.182 Namespace ID: 1 size: 0GB 00:18:46.182 Initialization complete. 00:18:46.182 INFO: using host memory buffer for IO 00:18:46.182 Hello world! 
00:18:46.182 [2024-12-16 02:40:16.564519] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:46.182 02:40:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:46.441 [2024-12-16 02:40:16.842242] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:47.376 Initializing NVMe Controllers 00:18:47.376 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:47.376 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:47.376 Initialization complete. Launching workers. 00:18:47.376 submit (in ns) avg, min, max = 7840.3, 3200.0, 3999531.4 00:18:47.376 complete (in ns) avg, min, max = 22524.4, 1756.2, 4993505.7 00:18:47.376 00:18:47.376 Submit histogram 00:18:47.376 ================ 00:18:47.376 Range in us Cumulative Count 00:18:47.376 3.200 - 3.215: 0.0921% ( 15) 00:18:47.376 3.215 - 3.230: 0.3377% ( 40) 00:18:47.376 3.230 - 3.246: 0.7552% ( 68) 00:18:47.376 3.246 - 3.261: 1.7437% ( 161) 00:18:47.376 3.261 - 3.276: 3.9786% ( 364) 00:18:47.376 3.276 - 3.291: 8.7432% ( 776) 00:18:47.376 3.291 - 3.307: 14.6988% ( 970) 00:18:47.376 3.307 - 3.322: 21.2501% ( 1067) 00:18:47.376 3.322 - 3.337: 28.6855% ( 1211) 00:18:47.376 3.337 - 3.352: 35.5989% ( 1126) 00:18:47.376 3.352 - 3.368: 41.2169% ( 915) 00:18:47.376 3.368 - 3.383: 45.8771% ( 759) 00:18:47.376 3.383 - 3.398: 50.7521% ( 794) 00:18:47.376 3.398 - 3.413: 54.8904% ( 674) 00:18:47.376 3.413 - 3.429: 59.1085% ( 687) 00:18:47.376 3.429 - 3.444: 65.5431% ( 1048) 00:18:47.376 3.444 - 3.459: 71.5908% ( 985) 00:18:47.376 3.459 - 3.474: 76.8220% ( 852) 00:18:47.376 3.474 - 3.490: 81.5374% ( 768) 00:18:47.376 3.490 - 3.505: 84.6565% ( 508) 00:18:47.376 3.505 - 3.520: 86.2713% ( 263) 
00:18:47.376 3.520 - 3.535: 87.1922% ( 150) 00:18:47.376 3.535 - 3.550: 87.7387% ( 89) 00:18:47.376 3.550 - 3.566: 88.1685% ( 70) 00:18:47.376 3.566 - 3.581: 88.7640% ( 97) 00:18:47.376 3.581 - 3.596: 89.4701% ( 115) 00:18:47.376 3.596 - 3.611: 90.4157% ( 154) 00:18:47.376 3.611 - 3.627: 91.3305% ( 149) 00:18:47.376 3.627 - 3.642: 92.3497% ( 166) 00:18:47.376 3.642 - 3.657: 93.1663% ( 133) 00:18:47.376 3.657 - 3.672: 94.0075% ( 137) 00:18:47.376 3.672 - 3.688: 94.8732% ( 141) 00:18:47.376 3.688 - 3.703: 95.9047% ( 168) 00:18:47.376 3.703 - 3.718: 96.7643% ( 140) 00:18:47.376 3.718 - 3.733: 97.5011% ( 120) 00:18:47.376 3.733 - 3.749: 98.1273% ( 102) 00:18:47.376 3.749 - 3.764: 98.5387% ( 67) 00:18:47.376 3.764 - 3.779: 98.8825% ( 56) 00:18:47.376 3.779 - 3.794: 99.1527% ( 44) 00:18:47.376 3.794 - 3.810: 99.3983% ( 40) 00:18:47.376 3.810 - 3.825: 99.4781% ( 13) 00:18:47.376 3.825 - 3.840: 99.5579% ( 13) 00:18:47.376 3.840 - 3.855: 99.5825% ( 4) 00:18:47.376 3.855 - 3.870: 99.6193% ( 6) 00:18:47.376 3.870 - 3.886: 99.6255% ( 1) 00:18:47.376 3.886 - 3.901: 99.6316% ( 1) 00:18:47.376 3.901 - 3.931: 99.6377% ( 1) 00:18:47.376 5.211 - 5.242: 99.6439% ( 1) 00:18:47.376 5.272 - 5.303: 99.6500% ( 1) 00:18:47.376 5.486 - 5.516: 99.6562% ( 1) 00:18:47.376 5.516 - 5.547: 99.6684% ( 2) 00:18:47.376 5.547 - 5.577: 99.6746% ( 1) 00:18:47.376 5.699 - 5.730: 99.6807% ( 1) 00:18:47.376 5.790 - 5.821: 99.6869% ( 1) 00:18:47.376 5.882 - 5.912: 99.6930% ( 1) 00:18:47.376 5.912 - 5.943: 99.6991% ( 1) 00:18:47.376 5.973 - 6.004: 99.7053% ( 1) 00:18:47.376 6.004 - 6.034: 99.7114% ( 1) 00:18:47.376 6.126 - 6.156: 99.7237% ( 2) 00:18:47.376 6.309 - 6.339: 99.7298% ( 1) 00:18:47.376 6.339 - 6.370: 99.7360% ( 1) 00:18:47.376 6.370 - 6.400: 99.7421% ( 1) 00:18:47.376 6.400 - 6.430: 99.7483% ( 1) 00:18:47.376 6.430 - 6.461: 99.7544% ( 1) 00:18:47.376 6.613 - 6.644: 99.7605% ( 1) 00:18:47.376 6.857 - 6.888: 99.7667% ( 1) 00:18:47.376 6.888 - 6.918: 99.7728% ( 1) 00:18:47.376 7.131 - 7.162: 
99.7790% ( 1) 00:18:47.376 7.162 - 7.192: 99.7851% ( 1) 00:18:47.376 7.192 - 7.223: 99.7912% ( 1) 00:18:47.376 7.223 - 7.253: 99.7974% ( 1) 00:18:47.376 7.253 - 7.284: 99.8035% ( 1) 00:18:47.376 7.345 - 7.375: 99.8097% ( 1) 00:18:47.376 7.497 - 7.528: 99.8158% ( 1) 00:18:47.376 7.528 - 7.558: 99.8219% ( 1) 00:18:47.376 7.589 - 7.619: 99.8281% ( 1) 00:18:47.376 7.802 - 7.863: 99.8342% ( 1) 00:18:47.376 7.863 - 7.924: 99.8465% ( 2) 00:18:47.376 7.985 - 8.046: 99.8526% ( 1) 00:18:47.376 [2024-12-16 02:40:17.864283] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:47.376 8.107 - 8.168: 99.8588% ( 1) 00:18:47.376 8.229 - 8.290: 99.8649% ( 1) 00:18:47.376 8.533 - 8.594: 99.8711% ( 1) 00:18:47.376 9.204 - 9.265: 99.8833% ( 2) 00:18:47.376 9.387 - 9.448: 99.8895% ( 1) 00:18:47.376 3994.575 - 4025.783: 100.0000% ( 18) 00:18:47.376 00:18:47.376 Complete histogram 00:18:47.376 ================== 00:18:47.376 Range in us Cumulative Count 00:18:47.376 1.752 - 1.760: 0.0184% ( 3) 00:18:47.376 1.760 - 1.768: 0.0614% ( 7) 00:18:47.376 1.768 - 1.775: 0.1535% ( 15) 00:18:47.376 1.775 - 1.783: 0.2333% ( 13) 00:18:47.376 1.783 - 1.790: 0.2763% ( 7) 00:18:47.376 1.790 - 1.798: 0.3070% ( 5) 00:18:47.376 1.798 - 1.806: 0.4052% ( 16) 00:18:47.376 1.806 - 1.813: 1.7867% ( 225) 00:18:47.376 1.813 - 1.821: 7.4108% ( 916) 00:18:47.376 1.821 - 1.829: 18.1249% ( 1745) 00:18:47.376 1.829 - 1.836: 29.4161% ( 1839) 00:18:47.376 1.836 - 1.844: 39.5039% ( 1643) 00:18:47.376 1.844 - 1.851: 52.3362% ( 2090) 00:18:47.376 1.851 - 1.859: 66.6789% ( 2336) 00:18:47.376 1.859 - 1.867: 79.1245% ( 2027) 00:18:47.376 1.867 - 1.874: 87.4317% ( 1353) 00:18:47.376 1.874 - 1.882: 91.8892% ( 726) 00:18:47.376 1.882 - 1.890: 94.1426% ( 367) 00:18:47.376 1.890 - 1.897: 95.4381% ( 211) 00:18:47.376 1.897 - 1.905: 96.2178% ( 127) 00:18:47.376 1.905 - 1.912: 96.9239% ( 115) 00:18:47.376 1.912 - 1.920: 97.4949% ( 93) 00:18:47.376 1.920 - 1.928: 98.0475% ( 90) 
00:18:47.376 1.928 - 1.935: 98.4159% ( 60) 00:18:47.376 1.935 - 1.943: 98.7782% ( 59) 00:18:47.376 1.943 - 1.950: 98.9992% ( 36) 00:18:47.376 1.950 - 1.966: 99.1957% ( 32) 00:18:47.376 1.966 - 1.981: 99.2448% ( 8) 00:18:47.376 1.981 - 1.996: 99.2632% ( 3) 00:18:47.376 1.996 - 2.011: 99.2755% ( 2) 00:18:47.376 2.072 - 2.088: 99.2816% ( 1) 00:18:47.376 2.301 - 2.316: 99.2878% ( 1) 00:18:47.376 3.855 - 3.870: 99.2939% ( 1) 00:18:47.376 3.962 - 3.992: 99.3001% ( 1) 00:18:47.376 4.023 - 4.053: 99.3062% ( 1) 00:18:47.376 4.328 - 4.358: 99.3123% ( 1) 00:18:47.376 4.632 - 4.663: 99.3185% ( 1) 00:18:47.376 4.724 - 4.754: 99.3246% ( 1) 00:18:47.376 4.785 - 4.815: 99.3308% ( 1) 00:18:47.376 5.029 - 5.059: 99.3369% ( 1) 00:18:47.376 5.120 - 5.150: 99.3430% ( 1) 00:18:47.376 5.211 - 5.242: 99.3492% ( 1) 00:18:47.377 5.242 - 5.272: 99.3553% ( 1) 00:18:47.377 5.516 - 5.547: 99.3615% ( 1) 00:18:47.377 5.760 - 5.790: 99.3676% ( 1) 00:18:47.377 5.790 - 5.821: 99.3737% ( 1) 00:18:47.377 5.882 - 5.912: 99.3799% ( 1) 00:18:47.377 6.004 - 6.034: 99.3860% ( 1) 00:18:47.377 6.065 - 6.095: 99.3922% ( 1) 00:18:47.377 6.126 - 6.156: 99.3983% ( 1) 00:18:47.377 6.156 - 6.187: 99.4044% ( 1) 00:18:47.377 6.217 - 6.248: 99.4106% ( 1) 00:18:47.377 6.309 - 6.339: 99.4167% ( 1) 00:18:47.377 6.339 - 6.370: 99.4229% ( 1) 00:18:47.377 6.400 - 6.430: 99.4290% ( 1) 00:18:47.377 6.430 - 6.461: 99.4413% ( 2) 00:18:47.377 6.979 - 7.010: 99.4474% ( 1) 00:18:47.377 7.070 - 7.101: 99.4536% ( 1) 00:18:47.377 7.406 - 7.436: 99.4597% ( 1) 00:18:47.377 8.229 - 8.290: 99.4658% ( 1) 00:18:47.377 9.387 - 9.448: 99.4720% ( 1) 00:18:47.377 11.947 - 12.008: 99.4781% ( 1) 00:18:47.377 13.288 - 13.349: 99.4843% ( 1) 00:18:47.377 3994.575 - 4025.783: 99.9939% ( 83) 00:18:47.377 4993.219 - 5024.427: 100.0000% ( 1) 00:18:47.377 00:18:47.377 02:40:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:47.377 
02:40:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:47.377 02:40:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:47.377 02:40:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:47.377 02:40:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:47.636 [ 00:18:47.636 { 00:18:47.636 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:47.636 "subtype": "Discovery", 00:18:47.636 "listen_addresses": [], 00:18:47.636 "allow_any_host": true, 00:18:47.636 "hosts": [] 00:18:47.636 }, 00:18:47.636 { 00:18:47.636 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:47.636 "subtype": "NVMe", 00:18:47.636 "listen_addresses": [ 00:18:47.636 { 00:18:47.636 "trtype": "VFIOUSER", 00:18:47.636 "adrfam": "IPv4", 00:18:47.636 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:47.636 "trsvcid": "0" 00:18:47.636 } 00:18:47.636 ], 00:18:47.636 "allow_any_host": true, 00:18:47.636 "hosts": [], 00:18:47.636 "serial_number": "SPDK1", 00:18:47.636 "model_number": "SPDK bdev Controller", 00:18:47.636 "max_namespaces": 32, 00:18:47.636 "min_cntlid": 1, 00:18:47.636 "max_cntlid": 65519, 00:18:47.636 "namespaces": [ 00:18:47.636 { 00:18:47.636 "nsid": 1, 00:18:47.636 "bdev_name": "Malloc1", 00:18:47.636 "name": "Malloc1", 00:18:47.636 "nguid": "1D6E60B5D3A44BE4B15AC6094E0A88BB", 00:18:47.636 "uuid": "1d6e60b5-d3a4-4be4-b15a-c6094e0a88bb" 00:18:47.636 } 00:18:47.636 ] 00:18:47.636 }, 00:18:47.636 { 00:18:47.636 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:47.636 "subtype": "NVMe", 00:18:47.636 "listen_addresses": [ 00:18:47.636 { 00:18:47.636 "trtype": "VFIOUSER", 00:18:47.636 "adrfam": "IPv4", 00:18:47.636 "traddr": 
"/var/run/vfio-user/domain/vfio-user2/2", 00:18:47.636 "trsvcid": "0" 00:18:47.636 } 00:18:47.636 ], 00:18:47.636 "allow_any_host": true, 00:18:47.636 "hosts": [], 00:18:47.636 "serial_number": "SPDK2", 00:18:47.636 "model_number": "SPDK bdev Controller", 00:18:47.636 "max_namespaces": 32, 00:18:47.636 "min_cntlid": 1, 00:18:47.636 "max_cntlid": 65519, 00:18:47.636 "namespaces": [ 00:18:47.636 { 00:18:47.636 "nsid": 1, 00:18:47.636 "bdev_name": "Malloc2", 00:18:47.636 "name": "Malloc2", 00:18:47.636 "nguid": "C5E4D067D727442681FC75E036C267B2", 00:18:47.636 "uuid": "c5e4d067-d727-4426-81fc-75e036c267b2" 00:18:47.636 } 00:18:47.636 ] 00:18:47.636 } 00:18:47.636 ] 00:18:47.636 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:47.636 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:47.636 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=969107 00:18:47.636 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:47.636 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:47.636 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:47.636 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:47.636 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:47.636 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:47.636 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:47.636 [2024-12-16 02:40:18.266313] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:47.894 Malloc3 00:18:47.894 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:47.894 [2024-12-16 02:40:18.522341] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:47.895 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:48.153 Asynchronous Event Request test 00:18:48.153 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:48.153 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:48.153 Registering asynchronous event callbacks... 00:18:48.153 Starting namespace attribute notice tests for all controllers... 00:18:48.153 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:48.153 aer_cb - Changed Namespace 00:18:48.153 Cleaning up... 
00:18:48.153 [ 00:18:48.153 { 00:18:48.153 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:48.153 "subtype": "Discovery", 00:18:48.153 "listen_addresses": [], 00:18:48.153 "allow_any_host": true, 00:18:48.153 "hosts": [] 00:18:48.153 }, 00:18:48.153 { 00:18:48.153 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:48.153 "subtype": "NVMe", 00:18:48.153 "listen_addresses": [ 00:18:48.153 { 00:18:48.153 "trtype": "VFIOUSER", 00:18:48.153 "adrfam": "IPv4", 00:18:48.153 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:48.153 "trsvcid": "0" 00:18:48.153 } 00:18:48.153 ], 00:18:48.153 "allow_any_host": true, 00:18:48.153 "hosts": [], 00:18:48.153 "serial_number": "SPDK1", 00:18:48.153 "model_number": "SPDK bdev Controller", 00:18:48.153 "max_namespaces": 32, 00:18:48.153 "min_cntlid": 1, 00:18:48.153 "max_cntlid": 65519, 00:18:48.153 "namespaces": [ 00:18:48.153 { 00:18:48.153 "nsid": 1, 00:18:48.153 "bdev_name": "Malloc1", 00:18:48.153 "name": "Malloc1", 00:18:48.153 "nguid": "1D6E60B5D3A44BE4B15AC6094E0A88BB", 00:18:48.153 "uuid": "1d6e60b5-d3a4-4be4-b15a-c6094e0a88bb" 00:18:48.153 }, 00:18:48.153 { 00:18:48.153 "nsid": 2, 00:18:48.153 "bdev_name": "Malloc3", 00:18:48.153 "name": "Malloc3", 00:18:48.153 "nguid": "6019F65507FB48B6986C93B9F08879A7", 00:18:48.153 "uuid": "6019f655-07fb-48b6-986c-93b9f08879a7" 00:18:48.153 } 00:18:48.153 ] 00:18:48.153 }, 00:18:48.153 { 00:18:48.153 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:48.153 "subtype": "NVMe", 00:18:48.153 "listen_addresses": [ 00:18:48.153 { 00:18:48.153 "trtype": "VFIOUSER", 00:18:48.153 "adrfam": "IPv4", 00:18:48.153 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:48.153 "trsvcid": "0" 00:18:48.153 } 00:18:48.153 ], 00:18:48.153 "allow_any_host": true, 00:18:48.153 "hosts": [], 00:18:48.153 "serial_number": "SPDK2", 00:18:48.153 "model_number": "SPDK bdev Controller", 00:18:48.153 "max_namespaces": 32, 00:18:48.153 "min_cntlid": 1, 00:18:48.153 "max_cntlid": 65519, 00:18:48.153 "namespaces": [ 
00:18:48.153 { 00:18:48.153 "nsid": 1, 00:18:48.153 "bdev_name": "Malloc2", 00:18:48.153 "name": "Malloc2", 00:18:48.154 "nguid": "C5E4D067D727442681FC75E036C267B2", 00:18:48.154 "uuid": "c5e4d067-d727-4426-81fc-75e036c267b2" 00:18:48.154 } 00:18:48.154 ] 00:18:48.154 } 00:18:48.154 ] 00:18:48.154 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 969107 00:18:48.154 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:48.154 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:48.154 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:48.154 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:48.154 [2024-12-16 02:40:18.765307] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:18:48.154 [2024-12-16 02:40:18.765341] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid969329 ] 00:18:48.154 [2024-12-16 02:40:18.806201] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:48.414 [2024-12-16 02:40:18.813083] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:48.414 [2024-12-16 02:40:18.813102] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb42033a000 00:18:48.414 [2024-12-16 02:40:18.814082] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:48.414 [2024-12-16 02:40:18.815096] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:48.414 [2024-12-16 02:40:18.816101] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:48.414 [2024-12-16 02:40:18.817100] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:48.414 [2024-12-16 02:40:18.818106] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:48.414 [2024-12-16 02:40:18.819111] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:48.414 [2024-12-16 02:40:18.820116] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:48.414 
[2024-12-16 02:40:18.821126] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:48.414 [2024-12-16 02:40:18.822136] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:48.414 [2024-12-16 02:40:18.822147] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb41f044000 00:18:48.414 [2024-12-16 02:40:18.823063] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:48.414 [2024-12-16 02:40:18.835111] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:48.414 [2024-12-16 02:40:18.835137] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:18:48.414 [2024-12-16 02:40:18.840218] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:48.414 [2024-12-16 02:40:18.840251] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:48.414 [2024-12-16 02:40:18.840320] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:18:48.414 [2024-12-16 02:40:18.840335] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:18:48.414 [2024-12-16 02:40:18.840340] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:18:48.414 [2024-12-16 02:40:18.841220] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:48.414 [2024-12-16 02:40:18.841230] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:18:48.414 [2024-12-16 02:40:18.841236] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:18:48.414 [2024-12-16 02:40:18.842232] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:48.414 [2024-12-16 02:40:18.842240] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:18:48.414 [2024-12-16 02:40:18.842247] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:48.414 [2024-12-16 02:40:18.843237] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:48.414 [2024-12-16 02:40:18.843246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:48.414 [2024-12-16 02:40:18.844238] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:48.414 [2024-12-16 02:40:18.844246] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:48.414 [2024-12-16 02:40:18.844250] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:48.414 [2024-12-16 02:40:18.844256] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:48.414 [2024-12-16 02:40:18.844363] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:18:48.414 [2024-12-16 02:40:18.844367] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:48.414 [2024-12-16 02:40:18.844372] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:48.414 [2024-12-16 02:40:18.845243] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:48.414 [2024-12-16 02:40:18.846257] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:48.414 [2024-12-16 02:40:18.847269] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:48.414 [2024-12-16 02:40:18.848276] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:48.414 [2024-12-16 02:40:18.848312] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:48.414 [2024-12-16 02:40:18.849288] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:48.414 [2024-12-16 02:40:18.849297] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:48.415 [2024-12-16 02:40:18.849303] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:48.415 [2024-12-16 02:40:18.849320] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:18:48.415 [2024-12-16 02:40:18.849329] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:48.415 [2024-12-16 02:40:18.849339] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:48.415 [2024-12-16 02:40:18.849344] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:48.415 [2024-12-16 02:40:18.849347] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:48.415 [2024-12-16 02:40:18.849356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:48.415 [2024-12-16 02:40:18.856854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:48.415 [2024-12-16 02:40:18.856864] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:18:48.415 [2024-12-16 02:40:18.856868] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:18:48.415 [2024-12-16 02:40:18.856872] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:18:48.415 [2024-12-16 02:40:18.856876] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:48.415 [2024-12-16 02:40:18.856881] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:18:48.415 [2024-12-16 02:40:18.856885] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:18:48.415 [2024-12-16 02:40:18.856889] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:18:48.415 [2024-12-16 02:40:18.856898] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:48.415 [2024-12-16 02:40:18.856908] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:48.415 [2024-12-16 02:40:18.864853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:48.415 [2024-12-16 02:40:18.864864] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:48.415 [2024-12-16 02:40:18.864872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:48.415 [2024-12-16 02:40:18.864879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:48.415 [2024-12-16 02:40:18.864886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:48.415 [2024-12-16 02:40:18.864890] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:48.415 [2024-12-16 02:40:18.864898] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:48.415 [2024-12-16 02:40:18.864906] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:48.415 [2024-12-16 02:40:18.872851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:48.415 [2024-12-16 02:40:18.872859] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:18:48.415 [2024-12-16 02:40:18.872863] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:48.415 [2024-12-16 02:40:18.872869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:18:48.415 [2024-12-16 02:40:18.872874] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:48.415 [2024-12-16 02:40:18.872882] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:48.415 [2024-12-16 02:40:18.880851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:48.415 [2024-12-16 02:40:18.880901] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:18:48.415 [2024-12-16 02:40:18.880911] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:48.415 
[2024-12-16 02:40:18.880917] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:48.415 [2024-12-16 02:40:18.880922] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:48.415 [2024-12-16 02:40:18.880925] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:48.415 [2024-12-16 02:40:18.880930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:48.415 [2024-12-16 02:40:18.888852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:48.415 [2024-12-16 02:40:18.888862] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:18:48.415 [2024-12-16 02:40:18.888871] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:18:48.415 [2024-12-16 02:40:18.888878] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:48.415 [2024-12-16 02:40:18.888884] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:48.415 [2024-12-16 02:40:18.888888] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:48.415 [2024-12-16 02:40:18.888891] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:48.415 [2024-12-16 02:40:18.888896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:48.415 [2024-12-16 02:40:18.896854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:48.415 [2024-12-16 02:40:18.896866] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:48.415 [2024-12-16 02:40:18.896873] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:48.415 [2024-12-16 02:40:18.896879] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:48.415 [2024-12-16 02:40:18.896886] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:48.415 [2024-12-16 02:40:18.896889] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:48.415 [2024-12-16 02:40:18.896895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:48.415 [2024-12-16 02:40:18.904854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:48.415 [2024-12-16 02:40:18.904863] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:48.415 [2024-12-16 02:40:18.904869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:48.415 [2024-12-16 02:40:18.904877] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:18:48.415 [2024-12-16 02:40:18.904882] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:18:48.415 [2024-12-16 02:40:18.904886] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:48.415 [2024-12-16 02:40:18.904891] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:18:48.415 [2024-12-16 02:40:18.904895] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:48.415 [2024-12-16 02:40:18.904899] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:18:48.415 [2024-12-16 02:40:18.904904] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:18:48.415 [2024-12-16 02:40:18.904918] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:48.415 [2024-12-16 02:40:18.912853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:48.415 [2024-12-16 02:40:18.912865] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:48.415 [2024-12-16 02:40:18.920853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:48.415 [2024-12-16 02:40:18.920864] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:48.415 [2024-12-16 02:40:18.928851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:48.415 [2024-12-16 
02:40:18.928863] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:48.415 [2024-12-16 02:40:18.936851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:48.415 [2024-12-16 02:40:18.936866] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:48.415 [2024-12-16 02:40:18.936870] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:48.415 [2024-12-16 02:40:18.936873] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:48.415 [2024-12-16 02:40:18.936876] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:48.415 [2024-12-16 02:40:18.936879] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:48.415 [2024-12-16 02:40:18.936885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:48.415 [2024-12-16 02:40:18.936893] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:48.415 [2024-12-16 02:40:18.936897] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:48.415 [2024-12-16 02:40:18.936900] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:48.415 [2024-12-16 02:40:18.936906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:48.415 [2024-12-16 02:40:18.936912] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:48.415 [2024-12-16 02:40:18.936915] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:48.416 [2024-12-16 02:40:18.936918] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:48.416 [2024-12-16 02:40:18.936924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:48.416 [2024-12-16 02:40:18.936930] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:48.416 [2024-12-16 02:40:18.936934] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:48.416 [2024-12-16 02:40:18.936937] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:48.416 [2024-12-16 02:40:18.936942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:48.416 [2024-12-16 02:40:18.944853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:48.416 [2024-12-16 02:40:18.944866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:48.416 [2024-12-16 02:40:18.944875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:48.416 [2024-12-16 02:40:18.944881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:48.416 ===================================================== 00:18:48.416 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:48.416 ===================================================== 00:18:48.416 Controller Capabilities/Features 00:18:48.416 
================================ 00:18:48.416 Vendor ID: 4e58 00:18:48.416 Subsystem Vendor ID: 4e58 00:18:48.416 Serial Number: SPDK2 00:18:48.416 Model Number: SPDK bdev Controller 00:18:48.416 Firmware Version: 25.01 00:18:48.416 Recommended Arb Burst: 6 00:18:48.416 IEEE OUI Identifier: 8d 6b 50 00:18:48.416 Multi-path I/O 00:18:48.416 May have multiple subsystem ports: Yes 00:18:48.416 May have multiple controllers: Yes 00:18:48.416 Associated with SR-IOV VF: No 00:18:48.416 Max Data Transfer Size: 131072 00:18:48.416 Max Number of Namespaces: 32 00:18:48.416 Max Number of I/O Queues: 127 00:18:48.416 NVMe Specification Version (VS): 1.3 00:18:48.416 NVMe Specification Version (Identify): 1.3 00:18:48.416 Maximum Queue Entries: 256 00:18:48.416 Contiguous Queues Required: Yes 00:18:48.416 Arbitration Mechanisms Supported 00:18:48.416 Weighted Round Robin: Not Supported 00:18:48.416 Vendor Specific: Not Supported 00:18:48.416 Reset Timeout: 15000 ms 00:18:48.416 Doorbell Stride: 4 bytes 00:18:48.416 NVM Subsystem Reset: Not Supported 00:18:48.416 Command Sets Supported 00:18:48.416 NVM Command Set: Supported 00:18:48.416 Boot Partition: Not Supported 00:18:48.416 Memory Page Size Minimum: 4096 bytes 00:18:48.416 Memory Page Size Maximum: 4096 bytes 00:18:48.416 Persistent Memory Region: Not Supported 00:18:48.416 Optional Asynchronous Events Supported 00:18:48.416 Namespace Attribute Notices: Supported 00:18:48.416 Firmware Activation Notices: Not Supported 00:18:48.416 ANA Change Notices: Not Supported 00:18:48.416 PLE Aggregate Log Change Notices: Not Supported 00:18:48.416 LBA Status Info Alert Notices: Not Supported 00:18:48.416 EGE Aggregate Log Change Notices: Not Supported 00:18:48.416 Normal NVM Subsystem Shutdown event: Not Supported 00:18:48.416 Zone Descriptor Change Notices: Not Supported 00:18:48.416 Discovery Log Change Notices: Not Supported 00:18:48.416 Controller Attributes 00:18:48.416 128-bit Host Identifier: Supported 00:18:48.416 
Non-Operational Permissive Mode: Not Supported 00:18:48.416 NVM Sets: Not Supported 00:18:48.416 Read Recovery Levels: Not Supported 00:18:48.416 Endurance Groups: Not Supported 00:18:48.416 Predictable Latency Mode: Not Supported 00:18:48.416 Traffic Based Keep ALive: Not Supported 00:18:48.416 Namespace Granularity: Not Supported 00:18:48.416 SQ Associations: Not Supported 00:18:48.416 UUID List: Not Supported 00:18:48.416 Multi-Domain Subsystem: Not Supported 00:18:48.416 Fixed Capacity Management: Not Supported 00:18:48.416 Variable Capacity Management: Not Supported 00:18:48.416 Delete Endurance Group: Not Supported 00:18:48.416 Delete NVM Set: Not Supported 00:18:48.416 Extended LBA Formats Supported: Not Supported 00:18:48.416 Flexible Data Placement Supported: Not Supported 00:18:48.416 00:18:48.416 Controller Memory Buffer Support 00:18:48.416 ================================ 00:18:48.416 Supported: No 00:18:48.416 00:18:48.416 Persistent Memory Region Support 00:18:48.416 ================================ 00:18:48.416 Supported: No 00:18:48.416 00:18:48.416 Admin Command Set Attributes 00:18:48.416 ============================ 00:18:48.416 Security Send/Receive: Not Supported 00:18:48.416 Format NVM: Not Supported 00:18:48.416 Firmware Activate/Download: Not Supported 00:18:48.416 Namespace Management: Not Supported 00:18:48.416 Device Self-Test: Not Supported 00:18:48.416 Directives: Not Supported 00:18:48.416 NVMe-MI: Not Supported 00:18:48.416 Virtualization Management: Not Supported 00:18:48.416 Doorbell Buffer Config: Not Supported 00:18:48.416 Get LBA Status Capability: Not Supported 00:18:48.416 Command & Feature Lockdown Capability: Not Supported 00:18:48.416 Abort Command Limit: 4 00:18:48.416 Async Event Request Limit: 4 00:18:48.416 Number of Firmware Slots: N/A 00:18:48.416 Firmware Slot 1 Read-Only: N/A 00:18:48.416 Firmware Activation Without Reset: N/A 00:18:48.416 Multiple Update Detection Support: N/A 00:18:48.416 Firmware Update 
Granularity: No Information Provided 00:18:48.416 Per-Namespace SMART Log: No 00:18:48.416 Asymmetric Namespace Access Log Page: Not Supported 00:18:48.416 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:48.416 Command Effects Log Page: Supported 00:18:48.416 Get Log Page Extended Data: Supported 00:18:48.416 Telemetry Log Pages: Not Supported 00:18:48.416 Persistent Event Log Pages: Not Supported 00:18:48.416 Supported Log Pages Log Page: May Support 00:18:48.416 Commands Supported & Effects Log Page: Not Supported 00:18:48.416 Feature Identifiers & Effects Log Page:May Support 00:18:48.416 NVMe-MI Commands & Effects Log Page: May Support 00:18:48.416 Data Area 4 for Telemetry Log: Not Supported 00:18:48.416 Error Log Page Entries Supported: 128 00:18:48.416 Keep Alive: Supported 00:18:48.416 Keep Alive Granularity: 10000 ms 00:18:48.416 00:18:48.416 NVM Command Set Attributes 00:18:48.416 ========================== 00:18:48.416 Submission Queue Entry Size 00:18:48.416 Max: 64 00:18:48.416 Min: 64 00:18:48.416 Completion Queue Entry Size 00:18:48.416 Max: 16 00:18:48.416 Min: 16 00:18:48.416 Number of Namespaces: 32 00:18:48.416 Compare Command: Supported 00:18:48.416 Write Uncorrectable Command: Not Supported 00:18:48.416 Dataset Management Command: Supported 00:18:48.416 Write Zeroes Command: Supported 00:18:48.416 Set Features Save Field: Not Supported 00:18:48.416 Reservations: Not Supported 00:18:48.416 Timestamp: Not Supported 00:18:48.416 Copy: Supported 00:18:48.416 Volatile Write Cache: Present 00:18:48.416 Atomic Write Unit (Normal): 1 00:18:48.416 Atomic Write Unit (PFail): 1 00:18:48.416 Atomic Compare & Write Unit: 1 00:18:48.416 Fused Compare & Write: Supported 00:18:48.416 Scatter-Gather List 00:18:48.416 SGL Command Set: Supported (Dword aligned) 00:18:48.416 SGL Keyed: Not Supported 00:18:48.416 SGL Bit Bucket Descriptor: Not Supported 00:18:48.416 SGL Metadata Pointer: Not Supported 00:18:48.416 Oversized SGL: Not Supported 00:18:48.416 SGL 
Metadata Address: Not Supported 00:18:48.416 SGL Offset: Not Supported 00:18:48.416 Transport SGL Data Block: Not Supported 00:18:48.416 Replay Protected Memory Block: Not Supported 00:18:48.416 00:18:48.416 Firmware Slot Information 00:18:48.416 ========================= 00:18:48.416 Active slot: 1 00:18:48.416 Slot 1 Firmware Revision: 25.01 00:18:48.416 00:18:48.416 00:18:48.416 Commands Supported and Effects 00:18:48.416 ============================== 00:18:48.416 Admin Commands 00:18:48.416 -------------- 00:18:48.416 Get Log Page (02h): Supported 00:18:48.416 Identify (06h): Supported 00:18:48.416 Abort (08h): Supported 00:18:48.416 Set Features (09h): Supported 00:18:48.416 Get Features (0Ah): Supported 00:18:48.416 Asynchronous Event Request (0Ch): Supported 00:18:48.416 Keep Alive (18h): Supported 00:18:48.416 I/O Commands 00:18:48.416 ------------ 00:18:48.416 Flush (00h): Supported LBA-Change 00:18:48.416 Write (01h): Supported LBA-Change 00:18:48.416 Read (02h): Supported 00:18:48.416 Compare (05h): Supported 00:18:48.416 Write Zeroes (08h): Supported LBA-Change 00:18:48.416 Dataset Management (09h): Supported LBA-Change 00:18:48.416 Copy (19h): Supported LBA-Change 00:18:48.416 00:18:48.416 Error Log 00:18:48.416 ========= 00:18:48.416 00:18:48.416 Arbitration 00:18:48.416 =========== 00:18:48.416 Arbitration Burst: 1 00:18:48.416 00:18:48.416 Power Management 00:18:48.416 ================ 00:18:48.416 Number of Power States: 1 00:18:48.416 Current Power State: Power State #0 00:18:48.416 Power State #0: 00:18:48.416 Max Power: 0.00 W 00:18:48.416 Non-Operational State: Operational 00:18:48.416 Entry Latency: Not Reported 00:18:48.416 Exit Latency: Not Reported 00:18:48.416 Relative Read Throughput: 0 00:18:48.416 Relative Read Latency: 0 00:18:48.416 Relative Write Throughput: 0 00:18:48.416 Relative Write Latency: 0 00:18:48.417 Idle Power: Not Reported 00:18:48.417 Active Power: Not Reported 00:18:48.417 Non-Operational Permissive Mode: Not 
Supported 00:18:48.417 00:18:48.417 Health Information 00:18:48.417 ================== 00:18:48.417 Critical Warnings: 00:18:48.417 Available Spare Space: OK 00:18:48.417 Temperature: OK 00:18:48.417 Device Reliability: OK 00:18:48.417 Read Only: No 00:18:48.417 Volatile Memory Backup: OK 00:18:48.417 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:48.417 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:48.417 Available Spare: 0% 00:18:48.417 Available Sp[2024-12-16 02:40:18.944965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:48.417 [2024-12-16 02:40:18.952852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:48.417 [2024-12-16 02:40:18.952879] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:18:48.417 [2024-12-16 02:40:18.952887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.417 [2024-12-16 02:40:18.952893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.417 [2024-12-16 02:40:18.952899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.417 [2024-12-16 02:40:18.952904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:48.417 [2024-12-16 02:40:18.952942] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:48.417 [2024-12-16 02:40:18.952951] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:48.417 
[2024-12-16 02:40:18.953945] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:48.417 [2024-12-16 02:40:18.953987] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:18:48.417 [2024-12-16 02:40:18.953993] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:18:48.417 [2024-12-16 02:40:18.954955] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:48.417 [2024-12-16 02:40:18.954966] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:18:48.417 [2024-12-16 02:40:18.955012] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:48.417 [2024-12-16 02:40:18.955969] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:48.417 are Threshold: 0% 00:18:48.417 Life Percentage Used: 0% 00:18:48.417 Data Units Read: 0 00:18:48.417 Data Units Written: 0 00:18:48.417 Host Read Commands: 0 00:18:48.417 Host Write Commands: 0 00:18:48.417 Controller Busy Time: 0 minutes 00:18:48.417 Power Cycles: 0 00:18:48.417 Power On Hours: 0 hours 00:18:48.417 Unsafe Shutdowns: 0 00:18:48.417 Unrecoverable Media Errors: 0 00:18:48.417 Lifetime Error Log Entries: 0 00:18:48.417 Warning Temperature Time: 0 minutes 00:18:48.417 Critical Temperature Time: 0 minutes 00:18:48.417 00:18:48.417 Number of Queues 00:18:48.417 ================ 00:18:48.417 Number of I/O Submission Queues: 127 00:18:48.417 Number of I/O Completion Queues: 127 00:18:48.417 00:18:48.417 Active Namespaces 00:18:48.417 ================= 00:18:48.417 Namespace ID:1 00:18:48.417 Error Recovery Timeout: Unlimited 
00:18:48.417 Command Set Identifier: NVM (00h) 00:18:48.417 Deallocate: Supported 00:18:48.417 Deallocated/Unwritten Error: Not Supported 00:18:48.417 Deallocated Read Value: Unknown 00:18:48.417 Deallocate in Write Zeroes: Not Supported 00:18:48.417 Deallocated Guard Field: 0xFFFF 00:18:48.417 Flush: Supported 00:18:48.417 Reservation: Supported 00:18:48.417 Namespace Sharing Capabilities: Multiple Controllers 00:18:48.417 Size (in LBAs): 131072 (0GiB) 00:18:48.417 Capacity (in LBAs): 131072 (0GiB) 00:18:48.417 Utilization (in LBAs): 131072 (0GiB) 00:18:48.417 NGUID: C5E4D067D727442681FC75E036C267B2 00:18:48.417 UUID: c5e4d067-d727-4426-81fc-75e036c267b2 00:18:48.417 Thin Provisioning: Not Supported 00:18:48.417 Per-NS Atomic Units: Yes 00:18:48.417 Atomic Boundary Size (Normal): 0 00:18:48.417 Atomic Boundary Size (PFail): 0 00:18:48.417 Atomic Boundary Offset: 0 00:18:48.417 Maximum Single Source Range Length: 65535 00:18:48.417 Maximum Copy Length: 65535 00:18:48.417 Maximum Source Range Count: 1 00:18:48.417 NGUID/EUI64 Never Reused: No 00:18:48.417 Namespace Write Protected: No 00:18:48.417 Number of LBA Formats: 1 00:18:48.417 Current LBA Format: LBA Format #00 00:18:48.417 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:48.417 00:18:48.417 02:40:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:48.675 [2024-12-16 02:40:19.173205] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:53.944 Initializing NVMe Controllers 00:18:53.944 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:53.944 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:18:53.944 Initialization complete. Launching workers. 00:18:53.944 ======================================================== 00:18:53.944 Latency(us) 00:18:53.944 Device Information : IOPS MiB/s Average min max 00:18:53.944 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39920.80 155.94 3205.96 991.22 8566.36 00:18:53.944 ======================================================== 00:18:53.944 Total : 39920.80 155.94 3205.96 991.22 8566.36 00:18:53.944 00:18:53.944 [2024-12-16 02:40:24.281116] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:53.944 02:40:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:53.944 [2024-12-16 02:40:24.509774] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:59.215 Initializing NVMe Controllers 00:18:59.215 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:59.215 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:59.215 Initialization complete. Launching workers. 
00:18:59.215 ======================================================== 00:18:59.215 Latency(us) 00:18:59.215 Device Information : IOPS MiB/s Average min max 00:18:59.215 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39902.12 155.87 3207.70 985.93 7591.12 00:18:59.215 ======================================================== 00:18:59.215 Total : 39902.12 155.87 3207.70 985.93 7591.12 00:18:59.215 00:18:59.215 [2024-12-16 02:40:29.530701] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:59.215 02:40:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:59.215 [2024-12-16 02:40:29.732942] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:04.485 [2024-12-16 02:40:34.876955] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:04.485 Initializing NVMe Controllers 00:19:04.485 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:04.485 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:04.485 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:19:04.485 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:19:04.485 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:19:04.485 Initialization complete. Launching workers. 
00:19:04.485 Starting thread on core 2 00:19:04.485 Starting thread on core 3 00:19:04.485 Starting thread on core 1 00:19:04.485 02:40:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:19:04.743 [2024-12-16 02:40:35.172307] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:08.933 [2024-12-16 02:40:39.016056] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:08.933 Initializing NVMe Controllers 00:19:08.933 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:08.933 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:08.933 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:19:08.933 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:19:08.933 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:19:08.933 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:19:08.933 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:08.933 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:08.933 Initialization complete. Launching workers. 
00:19:08.933 Starting thread on core 1 with urgent priority queue 00:19:08.933 Starting thread on core 2 with urgent priority queue 00:19:08.933 Starting thread on core 3 with urgent priority queue 00:19:08.933 Starting thread on core 0 with urgent priority queue 00:19:08.933 SPDK bdev Controller (SPDK2 ) core 0: 7458.00 IO/s 13.41 secs/100000 ios 00:19:08.933 SPDK bdev Controller (SPDK2 ) core 1: 8116.00 IO/s 12.32 secs/100000 ios 00:19:08.933 SPDK bdev Controller (SPDK2 ) core 2: 7277.67 IO/s 13.74 secs/100000 ios 00:19:08.933 SPDK bdev Controller (SPDK2 ) core 3: 6869.67 IO/s 14.56 secs/100000 ios 00:19:08.933 ======================================================== 00:19:08.933 00:19:08.933 02:40:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:08.933 [2024-12-16 02:40:39.297322] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:08.933 Initializing NVMe Controllers 00:19:08.933 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:08.933 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:08.933 Namespace ID: 1 size: 0GB 00:19:08.933 Initialization complete. 00:19:08.933 INFO: using host memory buffer for IO 00:19:08.933 Hello world! 
00:19:08.933 [2024-12-16 02:40:39.307400] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:08.933 02:40:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:08.933 [2024-12-16 02:40:39.588552] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:10.310 Initializing NVMe Controllers 00:19:10.310 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:10.310 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:10.310 Initialization complete. Launching workers. 00:19:10.310 submit (in ns) avg, min, max = 7643.9, 3193.3, 3999904.8 00:19:10.310 complete (in ns) avg, min, max = 21362.2, 1721.9, 7136021.0 00:19:10.310 00:19:10.310 Submit histogram 00:19:10.310 ================ 00:19:10.310 Range in us Cumulative Count 00:19:10.310 3.185 - 3.200: 0.0248% ( 4) 00:19:10.310 3.200 - 3.215: 0.1676% ( 23) 00:19:10.310 3.215 - 3.230: 0.5152% ( 56) 00:19:10.310 3.230 - 3.246: 1.2849% ( 124) 00:19:10.310 3.246 - 3.261: 3.5382% ( 363) 00:19:10.310 3.261 - 3.276: 8.7709% ( 843) 00:19:10.310 3.276 - 3.291: 14.5438% ( 930) 00:19:10.310 3.291 - 3.307: 21.1980% ( 1072) 00:19:10.310 3.307 - 3.322: 28.7585% ( 1218) 00:19:10.310 3.322 - 3.337: 35.7169% ( 1121) 00:19:10.310 3.337 - 3.352: 41.2104% ( 885) 00:19:10.310 3.352 - 3.368: 45.4066% ( 676) 00:19:10.310 3.368 - 3.383: 49.8200% ( 711) 00:19:10.310 3.383 - 3.398: 53.9851% ( 671) 00:19:10.310 3.398 - 3.413: 58.7772% ( 772) 00:19:10.310 3.413 - 3.429: 65.6859% ( 1113) 00:19:10.310 3.429 - 3.444: 71.1794% ( 885) 00:19:10.310 3.444 - 3.459: 76.3998% ( 841) 00:19:10.310 3.459 - 3.474: 81.0118% ( 743) 00:19:10.310 3.474 - 3.490: 83.7927% ( 448) 00:19:10.310 3.490 - 3.505: 85.4811% ( 272) 
00:19:10.310 3.505 - 3.520: 86.3315% ( 137) 00:19:10.310 3.520 - 3.535: 86.8839% ( 89) 00:19:10.310 3.535 - 3.550: 87.3309% ( 72) 00:19:10.310 3.550 - 3.566: 87.9640% ( 102) 00:19:10.310 3.566 - 3.581: 88.8020% ( 135) 00:19:10.310 3.581 - 3.596: 89.8945% ( 176) 00:19:10.310 3.596 - 3.611: 90.9001% ( 162) 00:19:10.310 3.611 - 3.627: 91.8808% ( 158) 00:19:10.310 3.627 - 3.642: 92.7250% ( 136) 00:19:10.310 3.642 - 3.657: 93.4140% ( 111) 00:19:10.310 3.657 - 3.672: 94.2831% ( 140) 00:19:10.310 3.672 - 3.688: 95.2204% ( 151) 00:19:10.310 3.688 - 3.703: 95.9901% ( 124) 00:19:10.310 3.703 - 3.718: 96.6170% ( 101) 00:19:10.310 3.718 - 3.733: 97.2005% ( 94) 00:19:10.310 3.733 - 3.749: 97.6164% ( 67) 00:19:10.310 3.749 - 3.764: 97.8957% ( 45) 00:19:10.310 3.764 - 3.779: 98.1813% ( 46) 00:19:10.310 3.779 - 3.794: 98.3364% ( 25) 00:19:10.310 3.794 - 3.810: 98.4606% ( 20) 00:19:10.310 3.810 - 3.825: 98.5475% ( 14) 00:19:10.310 3.825 - 3.840: 98.6220% ( 12) 00:19:10.310 3.840 - 3.855: 98.6530% ( 5) 00:19:10.310 3.855 - 3.870: 98.6716% ( 3) 00:19:10.310 3.870 - 3.886: 98.7089% ( 6) 00:19:10.310 3.886 - 3.901: 98.7399% ( 5) 00:19:10.310 3.901 - 3.931: 98.7958% ( 9) 00:19:10.310 3.931 - 3.962: 98.8454% ( 8) 00:19:10.310 3.962 - 3.992: 98.9758% ( 21) 00:19:10.310 3.992 - 4.023: 99.0565% ( 13) 00:19:10.310 4.023 - 4.053: 99.1186% ( 10) 00:19:10.310 4.053 - 4.084: 99.1558% ( 6) 00:19:10.310 4.084 - 4.114: 99.1868% ( 5) 00:19:10.310 4.114 - 4.145: 99.2241% ( 6) 00:19:10.310 4.145 - 4.175: 99.2365% ( 2) 00:19:10.310 4.175 - 4.206: 99.2613% ( 4) 00:19:10.310 4.206 - 4.236: 99.2862% ( 4) 00:19:10.310 4.236 - 4.267: 99.3048% ( 3) 00:19:10.310 4.267 - 4.297: 99.3172% ( 2) 00:19:10.310 4.328 - 4.358: 99.3296% ( 2) 00:19:10.310 4.358 - 4.389: 99.3420% ( 2) 00:19:10.310 4.389 - 4.419: 99.3482% ( 1) 00:19:10.310 4.419 - 4.450: 99.3544% ( 1) 00:19:10.310 4.480 - 4.510: 99.3606% ( 1) 00:19:10.310 4.510 - 4.541: 99.3669% ( 1) 00:19:10.310 4.602 - 4.632: 99.3731% ( 1) 00:19:10.310 4.693 - 4.724: 
99.3793% ( 1) 00:19:10.310 4.754 - 4.785: 99.3855% ( 1) 00:19:10.310 4.785 - 4.815: 99.3917% ( 1) 00:19:10.310 4.815 - 4.846: 99.4041% ( 2) 00:19:10.310 4.968 - 4.998: 99.4103% ( 1) 00:19:10.310 4.998 - 5.029: 99.4165% ( 1) 00:19:10.310 5.029 - 5.059: 99.4227% ( 1) 00:19:10.310 5.120 - 5.150: 99.4289% ( 1) 00:19:10.310 5.242 - 5.272: 99.4413% ( 2) 00:19:10.310 5.272 - 5.303: 99.4475% ( 1) 00:19:10.310 5.303 - 5.333: 99.4538% ( 1) 00:19:10.310 5.486 - 5.516: 99.4600% ( 1) 00:19:10.310 5.608 - 5.638: 99.4724% ( 2) 00:19:10.310 5.638 - 5.669: 99.4786% ( 1) 00:19:10.310 5.760 - 5.790: 99.4848% ( 1) 00:19:10.310 5.882 - 5.912: 99.4972% ( 2) 00:19:10.310 5.912 - 5.943: 99.5034% ( 1) 00:19:10.310 5.943 - 5.973: 99.5096% ( 1) 00:19:10.310 6.034 - 6.065: 99.5220% ( 2) 00:19:10.310 6.065 - 6.095: 99.5345% ( 2) 00:19:10.310 6.095 - 6.126: 99.5531% ( 3) 00:19:10.310 6.187 - 6.217: 99.5655% ( 2) 00:19:10.310 6.309 - 6.339: 99.5779% ( 2) 00:19:10.310 6.339 - 6.370: 99.5903% ( 2) 00:19:10.310 6.491 - 6.522: 99.5965% ( 1) 00:19:10.310 6.522 - 6.552: 99.6027% ( 1) 00:19:10.310 6.552 - 6.583: 99.6089% ( 1) 00:19:10.310 6.583 - 6.613: 99.6214% ( 2) 00:19:10.310 6.644 - 6.674: 99.6276% ( 1) 00:19:10.310 6.674 - 6.705: 99.6338% ( 1) 00:19:10.310 6.735 - 6.766: 99.6400% ( 1) 00:19:10.310 6.766 - 6.796: 99.6462% ( 1) 00:19:10.310 7.040 - 7.070: 99.6524% ( 1) 00:19:10.310 7.101 - 7.131: 99.6710% ( 3) 00:19:10.310 7.162 - 7.192: 99.6772% ( 1) 00:19:10.310 7.223 - 7.253: 99.6834% ( 1) 00:19:10.310 7.345 - 7.375: 99.6896% ( 1) 00:19:10.310 7.375 - 7.406: 99.6958% ( 1) 00:19:10.310 7.436 - 7.467: 99.7020% ( 1) 00:19:10.310 7.467 - 7.497: 99.7145% ( 2) 00:19:10.310 7.497 - 7.528: 99.7207% ( 1) 00:19:10.310 7.558 - 7.589: 99.7269% ( 1) 00:19:10.310 7.619 - 7.650: 99.7331% ( 1) 00:19:10.311 7.650 - 7.680: 99.7393% ( 1) 00:19:10.311 7.710 - 7.741: 99.7455% ( 1) 00:19:10.311 7.863 - 7.924: 99.7579% ( 2) 00:19:10.311 8.168 - 8.229: 99.7641% ( 1) 00:19:10.311 8.411 - 8.472: 99.7765% ( 2) 
00:19:10.311 8.838 - 8.899: 99.7827% ( 1) 00:19:10.311 8.960 - 9.021: 99.7890% ( 1) 00:19:10.311 9.448 - 9.509: 99.8014% ( 2) 00:19:10.311 10.179 - 10.240: 99.8076% ( 1) 00:19:10.311 10.423 - 10.484: 99.8138% ( 1) 00:19:10.311 10.667 - 10.728: 99.8200% ( 1) 00:19:10.311 11.459 - 11.520: 99.8262% ( 1) 00:19:10.311 11.825 - 11.886: 99.8324% ( 1) 00:19:10.311 12.434 - 12.495: 99.8386% ( 1) 00:19:10.311 12.678 - 12.739: 99.8448% ( 1) 00:19:10.311 12.800 - 12.861: 99.8510% ( 1) 00:19:10.311 14.080 - 14.141: 99.8572% ( 1) 00:19:10.311 14.324 - 14.385: 99.8634% ( 1) 00:19:10.311 14.690 - 14.750: 99.8696% ( 1) 00:19:10.311 14.811 - 14.872: 99.8759% ( 1) 00:19:10.311 15.848 - 15.970: 99.8821% ( 1) 00:19:10.311 17.920 - 18.042: 99.8883% ( 1) 00:19:10.311 19.870 - 19.992: 99.8945% ( 1) 00:19:10.311 3994.575 - 4025.783: 100.0000% ( 17) 00:19:10.311 00:19:10.311 Complete histogram 00:19:10.311 ================== 00:19:10.311 Range in us Cumulative Count 00:19:10.311 1.722 - 1.730: 0.0062% ( 1) 00:19:10.311 1.730 - 1.737: 0.0435% ( 6) 00:19:10.311 1.737 - 1.745: 0.0683% ( 4) 00:19:10.311 1.745 - 1.752: 0.0745% ( 1) 00:19:10.311 1.752 - 1.760: 0.0807% ( 1) 00:19:10.311 1.760 - 1.768: 0.6456% ( 91) 00:19:10.311 1.768 - 1.775: 7.3495% ( 1080) 00:19:10.311 1.775 - 1.783: 29.6027% ( 3585) 00:19:10.311 1.783 - 1.790: 54.6617% ( 4037) 00:19:10.311 1.790 - 1.798: 66.3377% ( 1881) 00:19:10.311 1.798 - 1.806: 70.2483% ( 630) 00:19:10.311 1.806 - 1.813: 72.3091% ( 332) 00:19:10.311 1.813 - 1.821: 74.4507% ( 345) 00:19:10.311 1.821 - 1.829: 79.4289% ( 802) 00:19:10.311 1.829 - 1.836: 86.6232% ( 1159) 00:19:10.311 1.836 - 1.844: 91.5332% ( 791) 00:19:10.311 1.844 - 1.851: 93.6996% ( 349) 00:19:10.311 1.851 - 1.859: 94.8479% ( 185) 00:19:10.311 1.859 - 1.867: 95.7356% ( 143) 00:19:10.311 1.867 - 1.874: 96.2756% ( 87) 00:19:10.311 1.874 - 1.882: 96.5611% ( 46) 00:19:10.311 1.882 - 1.890: 96.8032% ( 39) 00:19:10.311 1.890 - 1.897: 97.2191% ( 67) 00:19:10.311 1.897 - 1.905: 97.5047% ( 46) 
00:19:10.311 1.905 - 1.912: 97.7592% ( 41) 00:19:10.311 1.912 - 1.920: 97.8771% ( 19) 00:19:10.311 1.920 - 1.928: 97.9888% ( 18) 00:19:10.311 1.928 - 1.935: 98.0261% ( 6) 00:19:10.311 1.935 - 1.943: 98.0819% ( 9) 00:19:10.311 1.943 - 1.950: 98.1564% ( 12) 00:19:10.311 1.950 - 1.966: 98.2309% ( 12) 00:19:10.311 1.966 - 1.981: 98.2557% ( 4) 00:19:10.311 1.981 - 1.996: 98.2744% ( 3) 00:19:10.311 1.996 - 2.011: 98.2806% ( 1) 00:19:10.311 2.011 - 2.027: 98.2868% ( 1) 00:19:10.311 2.027 - 2.042: 98.3054% ( 3) 00:19:10.311 2.042 - 2.057: 98.3116% ( 1) 00:19:10.311 2.057 - 2.072: 98.3178% ( 1) 00:19:10.311 2.088 - 2.103: 98.3302% ( 2) 00:19:10.311 2.149 - 2.164: 98.3861% ( 9) 00:19:10.311 2.164 - 2.179: 98.5227% ( 22) 00:19:10.311 2.179 - 2.194: 98.5971% ( 12) 00:19:10.311 2.194 - 2.210: 98.6282% ( 5) 00:19:10.311 2.210 - 2.225: 98.7027% ( 12) 00:19:10.311 2.225 - 2.240: 98.7523% ( 8) 00:19:10.311 2.240 - 2.255: 98.7958% ( 7) 00:19:10.311 2.255 - 2.270: 98.8268% ( 5) 00:19:10.311 2.270 - 2.286: 98.8579% ( 5) 00:19:10.311 2.286 - 2.301: 98.8827% ( 4) 00:19:10.311 2.301 - 2.316: 98.9013% ( 3) 00:19:10.311 2.316 - 2.331: 98.9137% ( 2) 00:19:10.311 2.331 - 2.347: 98.9261% ( 2) 00:19:10.311 2.347 - 2.362: 98.9323% ( 1) 00:19:10.311 2.362 - 2.377: 98.9385% ( 1) 00:19:10.311 2.377 - 2.392: 98.9572% ( 3) 00:19:10.311 2.392 - 2.408: 98.9758% ( 3) 00:19:10.311 2.423 - 2.438: 98.9820% ( 1) 00:19:10.311 2.438 - 2.453: 98.9882% ( 1) 00:19:10.311 2.453 - 2.469: 99.0006% ( 2) 00:19:10.311 2.514 - 2.530: 99.0068% ( 1) 00:19:10.311 2.545 - 2.560: 99.0192% ( 2) 00:19:10.311 2.590 - 2.606: 99.0255% ( 1) 00:19:10.311 2.621 - 2.636: 99.0379% ( 2) 00:19:10.311 2.682 - 2.697: 99.0441% ( 1) 00:19:10.311 2.712 - 2.728: 99.0503% ( 1) 00:19:10.311 2.743 - 2.758: 99.0565% ( 1) 00:19:10.311 2.758 - 2.773: 99.0689% ( 2) 00:19:10.311 2.789 - 2.804: 99.0751% ( 1) 00:19:10.311 2.819 - 2.834: 99.0813% ( 1) 00:19:10.311 3.154 - 3.170: 99.0875% ( 1) 00:19:10.311 3.276 - 3.291: 99.0937% ( 1) 00:19:10.311 
3.352 - 3.368: 99.0999% ( 1) 00:19:10.311 3.444 - 3.459: 99.1124% ( 2) 00:19:10.311 3.505 - 3.520: 99.1186% ( 1) 00:19:10.311 3.627 - 3.642: 99.1248% ( 1) 00:19:10.311 3.794 - 3.810: 99.1310% ( 1) 00:19:10.311 3.901 - 3.931: 99.1372% ( 1) 00:19:10.311 4.084 - 4.114: 99.1434% ( 1) 00:19:10.311 4.114 - 4.145: 99.1496% ( 1) 00:19:10.311 4.145 - 4.175: 99.1558% ( 1) 00:19:10.311 4.175 - 4.206: 99.1682% ( 2) 00:19:10.311 4.267 - 4.297: 99.1744% ( 1) 00:19:10.311 4.358 - 4.389: 99.1930% ( 3) 00:19:10.311 4.389 - 4.419: 99.1993% ( 1) 00:19:10.311 4.480 - 4.510: 99.2055% ( 1) 00:19:10.311 4.541 - 4.571: 99.2117% ( 1) 00:19:10.311 4.602 - 4.632: 99.2179% ( 1) 00:19:10.311 4.663 - 4.693: 99.2303% ( 2) 00:19:10.311 4.724 - 4.754: 99.2365% ( 1) 00:19:10.311 4.815 - 4.846: 99.2427% ( 1) 00:19:10.311 4.937 - 4.968: 99.2489% ( 1) 00:19:10.311 4.998 - 5.029: 99.2551% ( 1) 00:19:10.311 5.059 - 5.090: 99.2675% ( 2) 00:19:10.311 5.150 - 5.181: 99.2737% ( 1) 00:19:10.311 5.394 - 5.425: 99.2800% ( 1) 00:19:10.311 5.516 - 5.547: 99.2862% ( 1) 00:19:10.311 5.547 - 5.577: 99.2986% ( 2) 00:19:10.311 5.577 - 5.608: 99.3048% ( 1) 00:19:10.311 5.638 - 5.669: 99.3110% ( 1) 00:19:10.311 5.699 - 5.730: 99.3172% ( 1) 00:19:10.311 5.821 - 5.851: 99.3234% ( 1) 00:19:10.311 5.943 - 5.973: 99.3358% ( 2) 00:19:10.311 6.004 - 6.034: 99.3420% ( 1) 00:19:10.311 6.034 - 6.065: 99.3482% ( 1) 00:19:10.311 6.126 - 6.156: 99.3606% ( 2) 00:19:10.311 6.278 - 6.309: 99.3669% ( 1) 00:19:10.311 6.309 - 6.339: 99.3731% ( 1) 00:19:10.311 6.370 - 6.400: 99.3793% ( 1) 00:19:10.311 6.491 - 6.522: 99.3855% ( 1) 00:19:10.311 6.522 - 6.552: 99.3917% ( 1) 00:19:10.311 6.766 - 6.796: 99.3979% ( 1) 00:19:10.311 6.827 - 6.857: 99.4041% ( 1) 00:19:10.311 7.040 - 7.070: 99.4103% ( 1) 00:19:10.311 7.192 - 7.223: 99.4165% ( 1) 00:19:10.311 7.436 - 7.467: 99.4227% ( 1) 00:19:10.311 7.741 - 7.771: 99.4289% ( 1) 00:19:10.311 9.509 - 9.570: 99.4351% ( 1) 00:19:10.311 [2024-12-16 02:40:40.682929]
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:10.311 10.179 - 10.240: 99.4413% ( 1) 00:19:10.311 10.301 - 10.362: 99.4475% ( 1) 00:19:10.311 10.606 - 10.667: 99.4538% ( 1) 00:19:10.311 11.947 - 12.008: 99.4600% ( 1) 00:19:10.311 12.617 - 12.678: 99.4724% ( 2) 00:19:10.311 14.202 - 14.263: 99.4786% ( 1) 00:19:10.311 14.446 - 14.507: 99.4910% ( 2) 00:19:10.311 17.067 - 17.189: 99.4972% ( 1) 00:19:10.311 18.773 - 18.895: 99.5034% ( 1) 00:19:10.311 19.017 - 19.139: 99.5096% ( 1) 00:19:10.311 38.766 - 39.010: 99.5158% ( 1) 00:19:10.311 3994.575 - 4025.783: 99.9938% ( 77) 00:19:10.311 7115.337 - 7146.545: 100.0000% ( 1) 00:19:10.311 00:19:10.311 02:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:19:10.311 02:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:10.311 02:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:19:10.311 02:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:19:10.311 02:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:10.311 [ 00:19:10.311 { 00:19:10.311 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:10.311 "subtype": "Discovery", 00:19:10.311 "listen_addresses": [], 00:19:10.311 "allow_any_host": true, 00:19:10.311 "hosts": [] 00:19:10.311 }, 00:19:10.311 { 00:19:10.311 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:10.311 "subtype": "NVMe", 00:19:10.311 "listen_addresses": [ 00:19:10.311 { 00:19:10.311 "trtype": "VFIOUSER", 00:19:10.311 "adrfam": "IPv4", 00:19:10.311 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 
"trsvcid": "0" 00:19:10.311 } 00:19:10.311 ], 00:19:10.311 "allow_any_host": true, 00:19:10.311 "hosts": [], 00:19:10.311 "serial_number": "SPDK1", 00:19:10.311 "model_number": "SPDK bdev Controller", 00:19:10.311 "max_namespaces": 32, 00:19:10.311 "min_cntlid": 1, 00:19:10.311 "max_cntlid": 65519, 00:19:10.311 "namespaces": [ 00:19:10.311 { 00:19:10.311 "nsid": 1, 00:19:10.311 "bdev_name": "Malloc1", 00:19:10.311 "name": "Malloc1", 00:19:10.311 "nguid": "1D6E60B5D3A44BE4B15AC6094E0A88BB", 00:19:10.311 "uuid": "1d6e60b5-d3a4-4be4-b15a-c6094e0a88bb" 00:19:10.312 }, 00:19:10.312 { 00:19:10.312 "nsid": 2, 00:19:10.312 "bdev_name": "Malloc3", 00:19:10.312 "name": "Malloc3", 00:19:10.312 "nguid": "6019F65507FB48B6986C93B9F08879A7", 00:19:10.312 "uuid": "6019f655-07fb-48b6-986c-93b9f08879a7" 00:19:10.312 } 00:19:10.312 ] 00:19:10.312 }, 00:19:10.312 { 00:19:10.312 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:10.312 "subtype": "NVMe", 00:19:10.312 "listen_addresses": [ 00:19:10.312 { 00:19:10.312 "trtype": "VFIOUSER", 00:19:10.312 "adrfam": "IPv4", 00:19:10.312 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:10.312 "trsvcid": "0" 00:19:10.312 } 00:19:10.312 ], 00:19:10.312 "allow_any_host": true, 00:19:10.312 "hosts": [], 00:19:10.312 "serial_number": "SPDK2", 00:19:10.312 "model_number": "SPDK bdev Controller", 00:19:10.312 "max_namespaces": 32, 00:19:10.312 "min_cntlid": 1, 00:19:10.312 "max_cntlid": 65519, 00:19:10.312 "namespaces": [ 00:19:10.312 { 00:19:10.312 "nsid": 1, 00:19:10.312 "bdev_name": "Malloc2", 00:19:10.312 "name": "Malloc2", 00:19:10.312 "nguid": "C5E4D067D727442681FC75E036C267B2", 00:19:10.312 "uuid": "c5e4d067-d727-4426-81fc-75e036c267b2" 00:19:10.312 } 00:19:10.312 ] 00:19:10.312 } 00:19:10.312 ] 00:19:10.312 02:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:10.312 02:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=972898 
00:19:10.312 02:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:10.312 02:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:19:10.312 02:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:19:10.312 02:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:10.312 02:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:10.312 02:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:19:10.312 02:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:10.312 02:40:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:19:10.571 [2024-12-16 02:40:41.082788] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:10.571 Malloc4 00:19:10.571 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:19:10.829 [2024-12-16 02:40:41.317521] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:10.829 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:10.829 
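The waitforfile helper traced above gates the AER test: the aer binary touches /tmp/aer_touch_file once its event callbacks are registered, and only then does the script add Malloc4 to trigger the namespace-change notice. A minimal sketch of that polling loop follows; the 0.1 s interval and retry default are assumptions, not taken from the real common/autotest_common.sh helper.

```shell
# Sketch of a waitforfile-style polling loop: block until a path exists
# or a retry budget runs out. Interval and default retry count are
# assumed, not copied from SPDK's common/autotest_common.sh.
waitforfile() {
    file=$1
    retries=${2:-100}            # assumed default: 100 ticks of 0.1 s
    i=0
    while [ ! -e "$file" ] && [ "$i" -lt "$retries" ]; do
        sleep 0.1
        i=$((i + 1))
    done
    [ -e "$file" ]               # status 0 only if the file appeared
}
```

With this shape, the test script can start the aer tool in the background, call `waitforfile /tmp/aer_touch_file`, and issue the namespace-add RPC only after the tool is ready.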
Asynchronous Event Request test 00:19:10.829 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:10.829 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:10.829 Registering asynchronous event callbacks... 00:19:10.829 Starting namespace attribute notice tests for all controllers... 00:19:10.829 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:10.829 aer_cb - Changed Namespace 00:19:10.829 Cleaning up... 00:19:11.088 [ 00:19:11.088 { 00:19:11.088 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:11.088 "subtype": "Discovery", 00:19:11.088 "listen_addresses": [], 00:19:11.088 "allow_any_host": true, 00:19:11.088 "hosts": [] 00:19:11.088 }, 00:19:11.088 { 00:19:11.088 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:11.088 "subtype": "NVMe", 00:19:11.088 "listen_addresses": [ 00:19:11.088 { 00:19:11.088 "trtype": "VFIOUSER", 00:19:11.088 "adrfam": "IPv4", 00:19:11.088 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:11.088 "trsvcid": "0" 00:19:11.088 } 00:19:11.088 ], 00:19:11.088 "allow_any_host": true, 00:19:11.088 "hosts": [], 00:19:11.088 "serial_number": "SPDK1", 00:19:11.088 "model_number": "SPDK bdev Controller", 00:19:11.088 "max_namespaces": 32, 00:19:11.088 "min_cntlid": 1, 00:19:11.088 "max_cntlid": 65519, 00:19:11.088 "namespaces": [ 00:19:11.088 { 00:19:11.088 "nsid": 1, 00:19:11.088 "bdev_name": "Malloc1", 00:19:11.088 "name": "Malloc1", 00:19:11.088 "nguid": "1D6E60B5D3A44BE4B15AC6094E0A88BB", 00:19:11.088 "uuid": "1d6e60b5-d3a4-4be4-b15a-c6094e0a88bb" 00:19:11.088 }, 00:19:11.088 { 00:19:11.088 "nsid": 2, 00:19:11.088 "bdev_name": "Malloc3", 00:19:11.088 "name": "Malloc3", 00:19:11.088 "nguid": "6019F65507FB48B6986C93B9F08879A7", 00:19:11.088 "uuid": "6019f655-07fb-48b6-986c-93b9f08879a7" 00:19:11.088 } 00:19:11.088 ] 00:19:11.088 }, 00:19:11.088 { 00:19:11.088 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:11.088 "subtype": "NVMe", 00:19:11.088 "listen_addresses": [ 
00:19:11.088 { 00:19:11.088 "trtype": "VFIOUSER", 00:19:11.088 "adrfam": "IPv4", 00:19:11.088 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:11.088 "trsvcid": "0" 00:19:11.088 } 00:19:11.088 ], 00:19:11.088 "allow_any_host": true, 00:19:11.088 "hosts": [], 00:19:11.088 "serial_number": "SPDK2", 00:19:11.088 "model_number": "SPDK bdev Controller", 00:19:11.088 "max_namespaces": 32, 00:19:11.088 "min_cntlid": 1, 00:19:11.088 "max_cntlid": 65519, 00:19:11.088 "namespaces": [ 00:19:11.088 { 00:19:11.088 "nsid": 1, 00:19:11.088 "bdev_name": "Malloc2", 00:19:11.088 "name": "Malloc2", 00:19:11.088 "nguid": "C5E4D067D727442681FC75E036C267B2", 00:19:11.088 "uuid": "c5e4d067-d727-4426-81fc-75e036c267b2" 00:19:11.088 }, 00:19:11.088 { 00:19:11.088 "nsid": 2, 00:19:11.088 "bdev_name": "Malloc4", 00:19:11.089 "name": "Malloc4", 00:19:11.089 "nguid": "BAE9CA880749475DA13E7E92883F5188", 00:19:11.089 "uuid": "bae9ca88-0749-475d-a13e-7e92883f5188" 00:19:11.089 } 00:19:11.089 ] 00:19:11.089 } 00:19:11.089 ] 00:19:11.089 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 972898 00:19:11.089 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:19:11.089 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 965269 00:19:11.089 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 965269 ']' 00:19:11.089 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 965269 00:19:11.089 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:19:11.089 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.089 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 965269 00:19:11.089 02:40:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:11.089 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:11.089 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 965269' 00:19:11.089 killing process with pid 965269 00:19:11.089 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 965269 00:19:11.089 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 965269 00:19:11.348 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:11.348 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:11.348 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:19:11.348 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:19:11.348 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:19:11.348 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=972926 00:19:11.348 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 972926' 00:19:11.348 Process pid: 972926 00:19:11.348 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:19:11.348 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:11.348 02:40:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 972926 00:19:11.348 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 972926 ']' 00:19:11.348 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.348 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:11.348 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:11.348 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:11.348 02:40:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:11.348 [2024-12-16 02:40:41.868954] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:19:11.348 [2024-12-16 02:40:41.869772] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:19:11.348 [2024-12-16 02:40:41.869806] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:11.348 [2024-12-16 02:40:41.941826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:11.348 [2024-12-16 02:40:41.963397] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:11.348 [2024-12-16 02:40:41.963434] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:11.348 [2024-12-16 02:40:41.963442] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:11.348 [2024-12-16 02:40:41.963447] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:11.348 [2024-12-16 02:40:41.963452] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:11.348 [2024-12-16 02:40:41.964876] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.348 [2024-12-16 02:40:41.964985] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:11.348 [2024-12-16 02:40:41.965114] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.348 [2024-12-16 02:40:41.965114] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:19:11.607 [2024-12-16 02:40:42.028290] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:19:11.607 [2024-12-16 02:40:42.028675] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:19:11.607 [2024-12-16 02:40:42.029229] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:19:11.607 [2024-12-16 02:40:42.029653] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:19:11.607 [2024-12-16 02:40:42.029694] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
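The setup_nvmf_vfio_user steps traced in this log follow a fixed RPC sequence: create the VFIOUSER transport once (here with the interrupt-mode flags -M -I), then for each device make a socket directory, create a malloc bdev, a subsystem, a namespace, and a listener. The sketch below echoes the command strings instead of executing them, since rpc.py and a running nvmf_tgt are needed for the real thing.

```shell
# Dry-run sketch of the per-device vfio-user setup sequence from the
# trace: each RPC is printed rather than executed, so no SPDK target is
# required. Replace the rpc() stub with the real rpc.py to run it.
rpc() { echo "rpc.py $*"; }

setup_vfio_user_device() {
    i=$1
    dir="/var/run/vfio-user/domain/vfio-user$i/$i"
    echo "mkdir -p $dir"                              # listener socket dir
    rpc bdev_malloc_create 64 512 -b "Malloc$i"       # 64 MiB bdev, 512 B blocks
    rpc nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
    rpc nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
        -t VFIOUSER -a "$dir" -s 0
}

rpc nvmf_create_transport -t VFIOUSER -M -I           # once, before the loop
for i in 1 2; do
    setup_vfio_user_device "$i"
done
```

The two NVMe subsystems listed earlier in the nvmf_get_subsystems output (SPDK1/cnode1 and SPDK2/cnode2) are exactly the product of two iterations of this loop.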
00:19:11.607 02:40:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.607 02:40:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:19:11.607 02:40:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:12.544 02:40:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:19:12.803 02:40:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:12.803 02:40:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:12.803 02:40:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:12.803 02:40:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:12.803 02:40:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:13.062 Malloc1 00:19:13.062 02:40:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:13.062 02:40:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:13.321 02:40:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:19:13.579 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:13.579 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:13.579 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:13.838 Malloc2 00:19:13.838 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:13.838 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:14.097 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:14.356 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:19:14.356 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 972926 00:19:14.356 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 972926 ']' 00:19:14.356 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 972926 00:19:14.356 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:19:14.356 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.356 02:40:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 972926 00:19:14.356 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:14.356 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:14.356 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 972926' 00:19:14.356 killing process with pid 972926 00:19:14.356 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 972926 00:19:14.356 02:40:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 972926 00:19:14.615 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:14.615 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:14.615 00:19:14.615 real 0m51.551s 00:19:14.615 user 3m19.578s 00:19:14.615 sys 0m3.277s 00:19:14.615 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:14.615 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:14.615 ************************************ 00:19:14.615 END TEST nvmf_vfio_user 00:19:14.615 ************************************ 00:19:14.615 02:40:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:14.615 02:40:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:14.615 02:40:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:14.615 02:40:45 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:19:14.615 ************************************ 00:19:14.615 START TEST nvmf_vfio_user_nvme_compliance 00:19:14.615 ************************************ 00:19:14.615 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:14.874 * Looking for test storage... 00:19:14.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:19:14.874 02:40:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:14.874 02:40:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:14.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.874 --rc genhtml_branch_coverage=1 00:19:14.874 --rc genhtml_function_coverage=1 00:19:14.874 --rc genhtml_legend=1 00:19:14.874 --rc geninfo_all_blocks=1 00:19:14.874 --rc geninfo_unexecuted_blocks=1 00:19:14.874 00:19:14.874 ' 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:14.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.874 --rc genhtml_branch_coverage=1 00:19:14.874 --rc genhtml_function_coverage=1 00:19:14.874 --rc genhtml_legend=1 00:19:14.874 --rc geninfo_all_blocks=1 00:19:14.874 --rc geninfo_unexecuted_blocks=1 00:19:14.874 00:19:14.874 ' 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:14.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.874 --rc genhtml_branch_coverage=1 00:19:14.874 --rc genhtml_function_coverage=1 00:19:14.874 --rc 
genhtml_legend=1 00:19:14.874 --rc geninfo_all_blocks=1 00:19:14.874 --rc geninfo_unexecuted_blocks=1 00:19:14.874 00:19:14.874 ' 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:14.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:14.874 --rc genhtml_branch_coverage=1 00:19:14.874 --rc genhtml_function_coverage=1 00:19:14.874 --rc genhtml_legend=1 00:19:14.874 --rc geninfo_all_blocks=1 00:19:14.874 --rc geninfo_unexecuted_blocks=1 00:19:14.874 00:19:14.874 ' 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:14.874 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.875 02:40:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:14.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:14.875 02:40:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=973666 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 973666' 00:19:14.875 Process pid: 973666 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 973666 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 973666 ']' 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:14.875 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:14.875 [2024-12-16 02:40:45.480233] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:19:14.875 [2024-12-16 02:40:45.480278] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:15.134 [2024-12-16 02:40:45.552903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:15.134 [2024-12-16 02:40:45.574464] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:15.134 [2024-12-16 02:40:45.574499] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:15.134 [2024-12-16 02:40:45.574506] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:15.134 [2024-12-16 02:40:45.574512] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:15.134 [2024-12-16 02:40:45.574518] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
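An aside on the `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected` message that appears in the trace above: it is a plain bash artifact, not a test failure. The script evaluates `'[' '' -eq 1 ']'`, and `-eq` requires integers on both sides, so an unset or empty variable makes `[` print that diagnostic and return a non-zero status (which the script then treats as "condition false"). A minimal standalone reproduction (the variable name here is illustrative, not taken from common.sh):

```shell
# Reproduce the "[: : integer expression expected" pattern from the log:
# an empty string on the left of -eq is not an integer, so [ errors out
# and the else branch is taken.
flag=''
if [ "$flag" -eq 1 ] 2>/dev/null; then
  echo "flag set"
else
  echo "flag empty or not 1"   # this branch runs; [ exits non-zero
fi

# Supplying a numeric default avoids the diagnostic entirely:
if [ "${flag:-0}" -eq 1 ]; then
  echo "flag set"
fi
```

The `${flag:-0}` guard is the usual fix; the test scripts shown here simply tolerate the noisy message because the non-zero status already gives the intended branching.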
00:19:15.134 [2024-12-16 02:40:45.575824] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:15.134 [2024-12-16 02:40:45.575950] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.134 [2024-12-16 02:40:45.575951] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.134 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:15.134 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:19:15.134 02:40:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:19:16.070 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:16.070 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:19:16.070 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:16.070 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.070 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:16.070 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.070 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:19:16.070 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:16.070 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.070 02:40:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:16.070 malloc0 00:19:16.070 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.070 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:19:16.070 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.070 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:16.329 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.329 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:16.329 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.329 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:16.329 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.329 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:16.329 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.329 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:16.329 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:16.329 02:40:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:19:16.329 00:19:16.329 00:19:16.329 CUnit - A unit testing framework for C - Version 2.1-3 00:19:16.329 http://cunit.sourceforge.net/ 00:19:16.329 00:19:16.329 00:19:16.329 Suite: nvme_compliance 00:19:16.329 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-16 02:40:46.913390] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:16.329 [2024-12-16 02:40:46.914715] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:19:16.329 [2024-12-16 02:40:46.914730] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:19:16.329 [2024-12-16 02:40:46.914737] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:19:16.329 [2024-12-16 02:40:46.916409] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:16.329 passed 00:19:16.587 Test: admin_identify_ctrlr_verify_fused ...[2024-12-16 02:40:46.993934] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:16.588 [2024-12-16 02:40:46.997952] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:16.588 passed 00:19:16.588 Test: admin_identify_ns ...[2024-12-16 02:40:47.077169] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:16.588 [2024-12-16 02:40:47.136858] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:19:16.588 [2024-12-16 02:40:47.144861] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:19:16.588 [2024-12-16 02:40:47.165951] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:19:16.588 passed 00:19:16.588 Test: admin_get_features_mandatory_features ...[2024-12-16 02:40:47.239697] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:16.588 [2024-12-16 02:40:47.244739] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:16.845 passed 00:19:16.845 Test: admin_get_features_optional_features ...[2024-12-16 02:40:47.323282] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:16.845 [2024-12-16 02:40:47.326302] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:16.845 passed 00:19:16.845 Test: admin_set_features_number_of_queues ...[2024-12-16 02:40:47.399994] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:17.104 [2024-12-16 02:40:47.505952] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:17.104 passed 00:19:17.104 Test: admin_get_log_page_mandatory_logs ...[2024-12-16 02:40:47.581725] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:17.104 [2024-12-16 02:40:47.584747] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:17.104 passed 00:19:17.104 Test: admin_get_log_page_with_lpo ...[2024-12-16 02:40:47.662469] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:17.104 [2024-12-16 02:40:47.730861] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:19:17.104 [2024-12-16 02:40:47.743927] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:17.362 passed 00:19:17.362 Test: fabric_property_get ...[2024-12-16 02:40:47.817789] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:17.362 [2024-12-16 02:40:47.819021] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:19:17.362 [2024-12-16 02:40:47.820811] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:17.362 passed 00:19:17.362 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-16 02:40:47.901373] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:17.362 [2024-12-16 02:40:47.902601] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:19:17.362 [2024-12-16 02:40:47.904397] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:17.362 passed 00:19:17.362 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-16 02:40:47.979128] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:17.620 [2024-12-16 02:40:48.065852] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:17.620 [2024-12-16 02:40:48.082857] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:17.620 [2024-12-16 02:40:48.087933] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:17.620 passed 00:19:17.620 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-16 02:40:48.163707] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:17.620 [2024-12-16 02:40:48.164940] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:17.620 [2024-12-16 02:40:48.166728] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:17.620 passed 00:19:17.620 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-16 02:40:48.241508] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:17.880 [2024-12-16 02:40:48.316863] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:17.880 [2024-12-16 
02:40:48.340852] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:17.880 [2024-12-16 02:40:48.345934] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:17.880 passed 00:19:17.880 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-16 02:40:48.421519] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:17.880 [2024-12-16 02:40:48.422742] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:17.880 [2024-12-16 02:40:48.422769] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:17.880 [2024-12-16 02:40:48.426552] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:17.880 passed 00:19:17.880 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-16 02:40:48.502098] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:18.138 [2024-12-16 02:40:48.594857] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:19:18.138 [2024-12-16 02:40:48.602859] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:18.138 [2024-12-16 02:40:48.610859] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:18.138 [2024-12-16 02:40:48.618858] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:18.138 [2024-12-16 02:40:48.647947] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:18.138 passed 00:19:18.138 Test: admin_create_io_sq_verify_pc ...[2024-12-16 02:40:48.723723] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:18.138 [2024-12-16 02:40:48.740865] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:18.138 [2024-12-16 02:40:48.758895] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:18.138 passed 00:19:18.397 Test: admin_create_io_qp_max_qps ...[2024-12-16 02:40:48.833395] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:19.332 [2024-12-16 02:40:49.924857] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:19:19.898 [2024-12-16 02:40:50.309455] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:19.898 passed 00:19:19.898 Test: admin_create_io_sq_shared_cq ...[2024-12-16 02:40:50.382549] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:19.898 [2024-12-16 02:40:50.513857] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:19.899 [2024-12-16 02:40:50.550918] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:20.157 passed 00:19:20.157 00:19:20.157 Run Summary: Type Total Ran Passed Failed Inactive 00:19:20.157 suites 1 1 n/a 0 0 00:19:20.157 tests 18 18 18 0 0 00:19:20.157 asserts 360 360 360 0 n/a 00:19:20.157 00:19:20.157 Elapsed time = 1.494 seconds 00:19:20.157 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 973666 00:19:20.158 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 973666 ']' 00:19:20.158 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 973666 00:19:20.158 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:19:20.158 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:20.158 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 973666 00:19:20.158 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:20.158 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:20.158 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 973666' 00:19:20.158 killing process with pid 973666 00:19:20.158 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 973666 00:19:20.158 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 973666 00:19:20.158 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:20.416 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:20.416 00:19:20.416 real 0m5.603s 00:19:20.416 user 0m15.705s 00:19:20.416 sys 0m0.506s 00:19:20.416 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:20.416 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:20.416 ************************************ 00:19:20.416 END TEST nvmf_vfio_user_nvme_compliance 00:19:20.416 ************************************ 00:19:20.416 02:40:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:20.416 02:40:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:20.416 02:40:50 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:20.416 02:40:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:20.416 ************************************ 00:19:20.416 START TEST nvmf_vfio_user_fuzz 00:19:20.416 ************************************ 00:19:20.416 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:20.416 * Looking for test storage... 00:19:20.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:20.416 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:20.416 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:19:20.416 02:40:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:19:20.416 02:40:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:20.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.416 --rc genhtml_branch_coverage=1 00:19:20.416 --rc genhtml_function_coverage=1 00:19:20.416 --rc genhtml_legend=1 00:19:20.416 --rc geninfo_all_blocks=1 00:19:20.416 --rc geninfo_unexecuted_blocks=1 00:19:20.416 00:19:20.416 ' 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:20.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.416 --rc genhtml_branch_coverage=1 00:19:20.416 --rc genhtml_function_coverage=1 00:19:20.416 --rc genhtml_legend=1 00:19:20.416 --rc geninfo_all_blocks=1 00:19:20.416 --rc geninfo_unexecuted_blocks=1 00:19:20.416 00:19:20.416 ' 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:20.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:20.416 --rc genhtml_branch_coverage=1 00:19:20.416 --rc genhtml_function_coverage=1 00:19:20.416 --rc genhtml_legend=1 00:19:20.416 --rc geninfo_all_blocks=1 00:19:20.416 --rc geninfo_unexecuted_blocks=1 00:19:20.416 00:19:20.416 ' 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:20.416 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:19:20.416 --rc genhtml_branch_coverage=1 00:19:20.416 --rc genhtml_function_coverage=1 00:19:20.416 --rc genhtml_legend=1 00:19:20.416 --rc geninfo_all_blocks=1 00:19:20.416 --rc geninfo_unexecuted_blocks=1 00:19:20.416 00:19:20.416 ' 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:20.416 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.675 02:40:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:20.675 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:20.675 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:20.676 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:20.676 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:20.676 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=974624 00:19:20.676 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 974624' 00:19:20.676 Process pid: 974624 00:19:20.676 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:20.676 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:20.676 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 974624 00:19:20.676 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 974624 ']' 00:19:20.676 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.676 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:20.676 02:40:51 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.676 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:20.676 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:20.934 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:20.934 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:19:20.934 02:40:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:21.870 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:21.870 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.870 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:21.870 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.870 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:21.870 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:21.870 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.870 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:21.870 malloc0 00:19:21.870 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.870 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:21.870 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.870 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:21.870 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.870 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:21.870 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.870 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:21.870 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.870 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:21.870 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.870 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:21.870 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.870 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:19:21.870 02:40:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:53.954 Fuzzing completed. Shutting down the fuzz application 00:19:53.954 00:19:53.954 Dumping successful admin opcodes: 00:19:53.954 9, 10, 00:19:53.954 Dumping successful io opcodes: 00:19:53.954 0, 00:19:53.954 NS: 0x20000081ef00 I/O qp, Total commands completed: 1003915, total successful commands: 3937, random_seed: 2163918592 00:19:53.954 NS: 0x20000081ef00 admin qp, Total commands completed: 246736, total successful commands: 57, random_seed: 4089444224 00:19:53.954 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:53.954 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.954 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:53.954 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.954 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 974624 00:19:53.954 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 974624 ']' 00:19:53.954 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 974624 00:19:53.954 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:19:53.954 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.954 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 974624 00:19:53.954 02:41:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:53.954 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:53.954 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 974624' 00:19:53.954 killing process with pid 974624 00:19:53.954 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 974624 00:19:53.954 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 974624 00:19:53.954 02:41:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:53.954 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:53.954 00:19:53.954 real 0m32.172s 00:19:53.954 user 0m29.997s 00:19:53.954 sys 0m30.886s 00:19:53.954 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:53.954 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:53.954 ************************************ 00:19:53.954 END TEST nvmf_vfio_user_fuzz 00:19:53.954 ************************************ 00:19:53.954 02:41:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:53.954 02:41:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:53.954 02:41:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:19:53.954 02:41:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:53.954 ************************************ 00:19:53.954 START TEST nvmf_auth_target 00:19:53.954 ************************************ 00:19:53.954 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:53.954 * Looking for test storage... 00:19:53.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:53.954 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:53.954 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:19:53.954 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:53.954 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:53.954 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:53.954 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:53.954 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:53.954 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:53.955 02:41:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:53.955 02:41:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:53.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.955 --rc genhtml_branch_coverage=1 00:19:53.955 --rc genhtml_function_coverage=1 00:19:53.955 --rc genhtml_legend=1 00:19:53.955 --rc geninfo_all_blocks=1 00:19:53.955 --rc geninfo_unexecuted_blocks=1 00:19:53.955 00:19:53.955 ' 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:53.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.955 --rc genhtml_branch_coverage=1 00:19:53.955 --rc genhtml_function_coverage=1 00:19:53.955 --rc genhtml_legend=1 00:19:53.955 --rc geninfo_all_blocks=1 00:19:53.955 --rc geninfo_unexecuted_blocks=1 00:19:53.955 00:19:53.955 ' 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:53.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.955 --rc genhtml_branch_coverage=1 00:19:53.955 --rc genhtml_function_coverage=1 00:19:53.955 --rc genhtml_legend=1 00:19:53.955 --rc geninfo_all_blocks=1 00:19:53.955 --rc geninfo_unexecuted_blocks=1 00:19:53.955 00:19:53.955 ' 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:53.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.955 --rc genhtml_branch_coverage=1 00:19:53.955 --rc genhtml_function_coverage=1 00:19:53.955 --rc genhtml_legend=1 00:19:53.955 
--rc geninfo_all_blocks=1 00:19:53.955 --rc geninfo_unexecuted_blocks=1 00:19:53.955 00:19:53.955 ' 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:53.955 
02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:53.955 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:53.955 02:41:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:53.955 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:53.956 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:53.956 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:53.956 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:53.956 02:41:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:53.956 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:53.956 02:41:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.234 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:59.234 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:59.234 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:59.234 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:59.234 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:59.234 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:59.234 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:59.234 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:59.234 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:59.235 02:41:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:59.235 02:41:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:59.235 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:59.235 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.235 
02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:59.235 Found net devices under 0000:af:00.0: cvl_0_0 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:59.235 
02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:59.235 Found net devices under 0000:af:00.1: cvl_0_1 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:59.235 02:41:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:59.235 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:59.236 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:59.236 02:41:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:59.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:59.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:19:59.236 00:19:59.236 --- 10.0.0.2 ping statistics --- 00:19:59.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.236 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:59.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:59.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:19:59.236 00:19:59.236 --- 10.0.0.1 ping statistics --- 00:19:59.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.236 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
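The trace above (nvmf/common.sh@250–291) carves the second port's netdev into its own network namespace so the initiator and target talk over a real TCP path instead of loopback, then verifies both directions with ping. A minimal sketch of the same topology follows; the interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.0/24 addresses are the ones discovered in this particular run, and the commands require root:

```shell
# Move the target-side netdev into a dedicated namespace and address both ends.
# Interface names below come from this run; substitute the ones on your system.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"

ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side

ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up

# Admit NVMe/TCP traffic (port 4420), then check reachability both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

With this in place, the target application is launched under `ip netns exec "$NS"` (the `NVMF_TARGET_NS_CMD` prefix seen later in the trace) while the host-side tools run in the default namespace.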
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=982851 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 982851 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 982851 ']' 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=982960 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@754 -- # digest=null 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=074bc005d527d973c1d991cb39044efc43e4c57deb65a5ca 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.1Ri 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 074bc005d527d973c1d991cb39044efc43e4c57deb65a5ca 0 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 074bc005d527d973c1d991cb39044efc43e4c57deb65a5ca 0 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=074bc005d527d973c1d991cb39044efc43e4c57deb65a5ca 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.1Ri 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.1Ri 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.1Ri 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:59.236 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=60f2735c58f524589c83dc00d612dc16f2cefcb768dbf65ea3b321790db58684 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.mTC 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 60f2735c58f524589c83dc00d612dc16f2cefcb768dbf65ea3b321790db58684 3 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 60f2735c58f524589c83dc00d612dc16f2cefcb768dbf65ea3b321790db58684 3 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=60f2735c58f524589c83dc00d612dc16f2cefcb768dbf65ea3b321790db58684 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.mTC 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.mTC 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.mTC 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a6571a5ac63c4809942d864ed723d4f4 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.qRC 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a6571a5ac63c4809942d864ed723d4f4 1 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
a6571a5ac63c4809942d864ed723d4f4 1 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a6571a5ac63c4809942d864ed723d4f4 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.qRC 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.qRC 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.qRC 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=5617023171cd712bffd5b6dc3d41ee26b50b662cac3a76dc 00:19:59.237 02:41:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.bsT 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 5617023171cd712bffd5b6dc3d41ee26b50b662cac3a76dc 2 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 5617023171cd712bffd5b6dc3d41ee26b50b662cac3a76dc 2 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=5617023171cd712bffd5b6dc3d41ee26b50b662cac3a76dc 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.bsT 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.bsT 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.bsT 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6afa3bfc042a7bfa6117e755bf74b83922c44d27399e7374 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.69x 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6afa3bfc042a7bfa6117e755bf74b83922c44d27399e7374 2 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6afa3bfc042a7bfa6117e755bf74b83922c44d27399e7374 2 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6afa3bfc042a7bfa6117e755bf74b83922c44d27399e7374 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.69x 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.69x 00:19:59.237 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.69x 00:19:59.238 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:59.238 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:59.238 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:59.238 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:59.238 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:59.238 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:59.238 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:59.238 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=790070f1ef9df75ab0e1bcf8e49f14c9 00:19:59.238 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:59.238 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.fjL 00:19:59.238 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 790070f1ef9df75ab0e1bcf8e49f14c9 1 00:19:59.238 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 790070f1ef9df75ab0e1bcf8e49f14c9 1 00:19:59.238 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:59.238 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:59.238 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=790070f1ef9df75ab0e1bcf8e49f14c9 00:19:59.238 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:19:59.238 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:59.539 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.fjL 00:19:59.539 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.fjL 00:19:59.539 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.fjL 00:19:59.539 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:59.539 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:59.539 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:59.539 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:59.539 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:59.539 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:59.539 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:59.539 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3cd4fab0a3c675583f6a7cd6e94a442175684c7f2f7931bd7da770aeacf87a54 00:19:59.539 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:59.539 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.bdE 00:19:59.539 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3cd4fab0a3c675583f6a7cd6e94a442175684c7f2f7931bd7da770aeacf87a54 3 00:19:59.539 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 3cd4fab0a3c675583f6a7cd6e94a442175684c7f2f7931bd7da770aeacf87a54 3 00:19:59.539 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:59.539 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:59.539 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3cd4fab0a3c675583f6a7cd6e94a442175684c7f2f7931bd7da770aeacf87a54 00:19:59.539 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:59.539 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:59.539 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.bdE 00:19:59.539 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.bdE 00:19:59.539 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.bdE 00:19:59.539 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:59.539 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 982851 00:19:59.539 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 982851 ']' 00:19:59.539 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.539 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:59.539 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
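The repeated `gen_dhchap_key`/`format_dhchap_key` steps above each draw random bytes with `xxd -p -c0 -l N /dev/urandom` and pipe the hex through an inline `python -` snippet that the trace elides. A minimal sketch of that formatting step, assuming the DHHC-1 secret representation used for DH-HMAC-CHAP (base64 of the key bytes followed by their little-endian CRC32); the function name and payload layout here are inferences from the surrounding trace, not code copied from nvmf/common.sh:

```python
import base64
import os
import struct
import zlib


def format_dhchap_key(key_hex: str, digest_id: int) -> str:
    """Wrap raw hex key material in the DHHC-1 interchange form.

    Assumed layout: "DHHC-1:<digest>:<base64(key || crc32(key) LE)>:",
    where digest_id follows the map seen in the trace
    (null=0, sha256=1, sha384=2, sha512=3).
    """
    key = bytes.fromhex(key_hex)
    payload = key + struct.pack("<I", zlib.crc32(key))
    return f"DHHC-1:{digest_id:02x}:{base64.b64encode(payload).decode()}:"


# 48 hex chars = 24 random bytes, matching `xxd -p -c0 -l 24 /dev/urandom`
key_hex = os.urandom(24).hex()
print(format_dhchap_key(key_hex, 0))  # digest 0 = null, per the digests map
```

The resulting string is what gets written to the `chmod 0600` key files (`/tmp/spdk.key-*.???`) and registered on both sides via `keyring_file_add_key` later in the trace.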
00:19:59.539 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:59.539 02:41:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.858 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:59.858 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:59.858 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 982960 /var/tmp/host.sock 00:19:59.858 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 982960 ']' 00:19:59.858 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:59.858 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:59.858 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:59.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:19:59.859 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:59.859 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.859 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:59.859 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:59.859 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:59.859 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.859 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.859 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.859 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:59.859 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1Ri 00:19:59.859 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.859 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.859 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.859 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.1Ri 00:19:59.859 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.1Ri 00:20:00.123 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.mTC ]] 00:20:00.123 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.mTC 00:20:00.123 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.123 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.123 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.123 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.mTC 00:20:00.123 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.mTC 00:20:00.382 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:00.382 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.qRC 00:20:00.382 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.382 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.382 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.382 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.qRC 00:20:00.382 02:41:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.qRC 00:20:00.640 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.bsT ]] 00:20:00.640 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.bsT 00:20:00.640 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.640 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.641 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.641 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.bsT 00:20:00.641 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.bsT 00:20:00.641 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:00.641 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.69x 00:20:00.641 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.641 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.641 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.641 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.69x 00:20:00.641 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.69x 00:20:00.899 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.fjL ]] 00:20:00.899 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.fjL 00:20:00.899 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.899 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.899 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.899 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.fjL 00:20:00.899 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.fjL 00:20:01.157 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:01.157 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.bdE 00:20:01.157 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.157 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.157 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.157 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.bdE 00:20:01.157 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.bdE 00:20:01.157 02:41:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:20:01.157 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:01.157 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:01.157 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.157 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:01.157 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:01.414 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:20:01.414 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.414 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:01.414 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:01.414 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:01.414 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.414 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.414 02:41:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.414 02:41:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.414 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.414 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.414 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.414 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:01.671 00:20:01.671 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:01.671 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:01.671 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.930 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.931 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.931 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.931 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:01.931 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.931 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:01.931 { 00:20:01.931 "cntlid": 1, 00:20:01.931 "qid": 0, 00:20:01.931 "state": "enabled", 00:20:01.931 "thread": "nvmf_tgt_poll_group_000", 00:20:01.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:01.931 "listen_address": { 00:20:01.931 "trtype": "TCP", 00:20:01.931 "adrfam": "IPv4", 00:20:01.931 "traddr": "10.0.0.2", 00:20:01.931 "trsvcid": "4420" 00:20:01.931 }, 00:20:01.931 "peer_address": { 00:20:01.931 "trtype": "TCP", 00:20:01.931 "adrfam": "IPv4", 00:20:01.931 "traddr": "10.0.0.1", 00:20:01.931 "trsvcid": "58484" 00:20:01.931 }, 00:20:01.931 "auth": { 00:20:01.931 "state": "completed", 00:20:01.931 "digest": "sha256", 00:20:01.931 "dhgroup": "null" 00:20:01.931 } 00:20:01.931 } 00:20:01.931 ]' 00:20:01.931 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:01.931 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:01.931 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.931 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:01.931 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.189 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.189 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.189 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.189 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:20:02.190 02:41:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:20:02.756 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.757 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:02.757 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.757 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.757 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.757 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.757 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:20:02.757 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:03.015 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:20:03.015 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.015 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:03.015 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:03.015 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:03.015 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.015 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.015 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.015 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.015 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.015 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.015 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.015 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:03.274 00:20:03.274 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.274 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.274 02:41:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.532 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.532 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.532 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.532 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.532 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.532 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.532 { 00:20:03.532 "cntlid": 3, 00:20:03.532 "qid": 0, 00:20:03.532 "state": "enabled", 00:20:03.532 "thread": "nvmf_tgt_poll_group_000", 00:20:03.532 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:03.532 "listen_address": { 00:20:03.532 "trtype": "TCP", 00:20:03.532 "adrfam": "IPv4", 00:20:03.532 
"traddr": "10.0.0.2", 00:20:03.532 "trsvcid": "4420" 00:20:03.532 }, 00:20:03.532 "peer_address": { 00:20:03.532 "trtype": "TCP", 00:20:03.532 "adrfam": "IPv4", 00:20:03.532 "traddr": "10.0.0.1", 00:20:03.532 "trsvcid": "58522" 00:20:03.532 }, 00:20:03.532 "auth": { 00:20:03.532 "state": "completed", 00:20:03.532 "digest": "sha256", 00:20:03.532 "dhgroup": "null" 00:20:03.532 } 00:20:03.532 } 00:20:03.532 ]' 00:20:03.532 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.532 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.532 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.532 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:03.532 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.532 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.532 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.532 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.791 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:20:03.791 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
--hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:20:04.358 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.358 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:04.358 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.358 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.358 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.358 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.358 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:04.358 02:41:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:04.619 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:20:04.619 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.619 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:04.619 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:20:04.619 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:04.619 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.619 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.619 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.619 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.619 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.619 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.619 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.619 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:04.879 00:20:04.879 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:04.879 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.879 
02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.137 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.137 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.137 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.137 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.137 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.137 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.137 { 00:20:05.137 "cntlid": 5, 00:20:05.137 "qid": 0, 00:20:05.137 "state": "enabled", 00:20:05.137 "thread": "nvmf_tgt_poll_group_000", 00:20:05.137 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:05.137 "listen_address": { 00:20:05.137 "trtype": "TCP", 00:20:05.137 "adrfam": "IPv4", 00:20:05.137 "traddr": "10.0.0.2", 00:20:05.137 "trsvcid": "4420" 00:20:05.137 }, 00:20:05.137 "peer_address": { 00:20:05.137 "trtype": "TCP", 00:20:05.137 "adrfam": "IPv4", 00:20:05.137 "traddr": "10.0.0.1", 00:20:05.137 "trsvcid": "58542" 00:20:05.137 }, 00:20:05.137 "auth": { 00:20:05.137 "state": "completed", 00:20:05.137 "digest": "sha256", 00:20:05.137 "dhgroup": "null" 00:20:05.137 } 00:20:05.137 } 00:20:05.137 ]' 00:20:05.137 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.137 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:05.137 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:20:05.137 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:05.137 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.137 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.137 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.137 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.396 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:20:05.396 02:41:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:20:05.963 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.963 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:05.963 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.963 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.963 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.963 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.963 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:05.963 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:06.222 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:20:06.222 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.222 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:06.222 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:06.222 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:06.222 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.222 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:06.222 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.222 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:06.222 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.222 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:06.222 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:06.222 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:06.481 00:20:06.481 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.481 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.481 02:41:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.740 02:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.740 02:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.740 02:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.740 02:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.740 02:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.740 
02:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:06.740 { 00:20:06.740 "cntlid": 7, 00:20:06.740 "qid": 0, 00:20:06.740 "state": "enabled", 00:20:06.740 "thread": "nvmf_tgt_poll_group_000", 00:20:06.740 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:06.740 "listen_address": { 00:20:06.740 "trtype": "TCP", 00:20:06.740 "adrfam": "IPv4", 00:20:06.740 "traddr": "10.0.0.2", 00:20:06.740 "trsvcid": "4420" 00:20:06.740 }, 00:20:06.740 "peer_address": { 00:20:06.740 "trtype": "TCP", 00:20:06.740 "adrfam": "IPv4", 00:20:06.740 "traddr": "10.0.0.1", 00:20:06.740 "trsvcid": "58574" 00:20:06.740 }, 00:20:06.740 "auth": { 00:20:06.740 "state": "completed", 00:20:06.740 "digest": "sha256", 00:20:06.740 "dhgroup": "null" 00:20:06.740 } 00:20:06.740 } 00:20:06.740 ]' 00:20:06.740 02:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:06.740 02:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.740 02:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:06.740 02:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:06.740 02:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:06.740 02:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.740 02:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.740 02:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.999 02:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:20:06.999 02:41:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:20:07.566 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.566 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:07.566 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.566 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.566 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.566 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:07.567 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.567 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:07.567 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:20:07.825 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:20:07.825 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.825 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:07.825 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:07.825 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:07.825 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.825 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.825 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.825 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.825 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.825 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.825 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:07.825 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.084 00:20:08.084 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.084 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.084 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.084 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.084 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.084 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.084 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.084 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.343 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.343 { 00:20:08.343 "cntlid": 9, 00:20:08.343 "qid": 0, 00:20:08.343 "state": "enabled", 00:20:08.343 "thread": "nvmf_tgt_poll_group_000", 00:20:08.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:08.343 "listen_address": { 00:20:08.343 "trtype": "TCP", 00:20:08.343 "adrfam": "IPv4", 00:20:08.343 "traddr": "10.0.0.2", 00:20:08.343 "trsvcid": "4420" 00:20:08.343 }, 00:20:08.343 "peer_address": { 00:20:08.343 "trtype": "TCP", 00:20:08.343 "adrfam": "IPv4", 00:20:08.343 "traddr": "10.0.0.1", 00:20:08.343 "trsvcid": "58612" 00:20:08.343 
}, 00:20:08.343 "auth": { 00:20:08.343 "state": "completed", 00:20:08.343 "digest": "sha256", 00:20:08.343 "dhgroup": "ffdhe2048" 00:20:08.343 } 00:20:08.343 } 00:20:08.343 ]' 00:20:08.343 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.343 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:08.343 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.343 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:08.343 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.343 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.343 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.343 02:41:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.602 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:20:08.602 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret 
DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:20:09.170 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.170 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.170 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:09.170 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.170 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.170 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.170 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.170 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:09.170 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:09.170 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:20:09.170 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.170 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:09.170 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:09.170 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:20:09.170 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.170 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.170 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.170 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.429 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.429 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.429 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.429 02:41:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:09.429 00:20:09.429 02:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.429 02:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.429 02:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.687 02:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.687 02:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.687 02:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.687 02:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.687 02:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.687 02:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.687 { 00:20:09.687 "cntlid": 11, 00:20:09.687 "qid": 0, 00:20:09.687 "state": "enabled", 00:20:09.687 "thread": "nvmf_tgt_poll_group_000", 00:20:09.687 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:09.687 "listen_address": { 00:20:09.687 "trtype": "TCP", 00:20:09.687 "adrfam": "IPv4", 00:20:09.687 "traddr": "10.0.0.2", 00:20:09.687 "trsvcid": "4420" 00:20:09.687 }, 00:20:09.687 "peer_address": { 00:20:09.687 "trtype": "TCP", 00:20:09.687 "adrfam": "IPv4", 00:20:09.687 "traddr": "10.0.0.1", 00:20:09.687 "trsvcid": "58638" 00:20:09.687 }, 00:20:09.687 "auth": { 00:20:09.687 "state": "completed", 00:20:09.687 "digest": "sha256", 00:20:09.687 "dhgroup": "ffdhe2048" 00:20:09.687 } 00:20:09.687 } 00:20:09.687 ]' 00:20:09.687 02:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.946 02:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.946 02:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.946 02:41:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:09.946 02:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.946 02:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.946 02:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.946 02:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.205 02:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:20:10.205 02:41:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:20:10.772 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.772 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:10.772 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:10.773 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.773 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.773 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:10.773 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:10.773 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:10.773 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:20:10.773 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:10.773 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:10.773 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:10.773 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:10.773 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.773 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.773 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.773 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:10.773 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.773 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.773 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:10.773 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:11.032 00:20:11.032 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:11.032 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:11.032 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.290 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.290 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.290 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.290 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.290 02:41:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.290 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:11.290 { 00:20:11.290 "cntlid": 13, 00:20:11.290 "qid": 0, 00:20:11.290 "state": "enabled", 00:20:11.290 "thread": "nvmf_tgt_poll_group_000", 00:20:11.290 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:11.290 "listen_address": { 00:20:11.290 "trtype": "TCP", 00:20:11.290 "adrfam": "IPv4", 00:20:11.290 "traddr": "10.0.0.2", 00:20:11.290 "trsvcid": "4420" 00:20:11.290 }, 00:20:11.290 "peer_address": { 00:20:11.290 "trtype": "TCP", 00:20:11.290 "adrfam": "IPv4", 00:20:11.290 "traddr": "10.0.0.1", 00:20:11.290 "trsvcid": "42088" 00:20:11.290 }, 00:20:11.290 "auth": { 00:20:11.290 "state": "completed", 00:20:11.290 "digest": "sha256", 00:20:11.290 "dhgroup": "ffdhe2048" 00:20:11.290 } 00:20:11.290 } 00:20:11.290 ]' 00:20:11.290 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:11.549 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.549 02:41:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:11.549 02:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:11.549 02:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:11.549 02:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.549 02:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.549 02:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.808 02:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:20:11.808 02:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:20:12.374 02:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.375 02:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:12.375 02:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.375 02:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.375 02:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.375 02:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.375 02:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:12.375 02:41:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:12.375 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:20:12.375 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.375 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:12.375 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:12.375 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:12.375 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.375 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:12.375 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.375 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.634 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.634 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:12.634 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:12.634 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:12.634 00:20:12.893 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.893 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.893 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.893 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.893 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.893 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.893 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.893 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.893 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.893 { 00:20:12.893 "cntlid": 15, 00:20:12.893 "qid": 0, 00:20:12.893 "state": "enabled", 00:20:12.893 "thread": "nvmf_tgt_poll_group_000", 00:20:12.893 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:12.893 "listen_address": { 00:20:12.893 "trtype": "TCP", 00:20:12.893 "adrfam": "IPv4", 00:20:12.893 "traddr": "10.0.0.2", 00:20:12.893 "trsvcid": "4420" 00:20:12.893 }, 00:20:12.893 "peer_address": { 00:20:12.893 "trtype": "TCP", 00:20:12.893 "adrfam": "IPv4", 00:20:12.893 "traddr": "10.0.0.1", 
00:20:12.893 "trsvcid": "42112" 00:20:12.893 }, 00:20:12.893 "auth": { 00:20:12.893 "state": "completed", 00:20:12.893 "digest": "sha256", 00:20:12.893 "dhgroup": "ffdhe2048" 00:20:12.893 } 00:20:12.893 } 00:20:12.893 ]' 00:20:12.893 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.151 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:13.151 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.151 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:13.151 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.151 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.151 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.151 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.410 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:20:13.410 02:41:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:20:13.978 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.978 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:13.978 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.978 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.978 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.978 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.978 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.978 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:13.978 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:13.978 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:13.978 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.978 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:13.978 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:13.978 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:13.978 02:41:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.978 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.978 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.978 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.978 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.978 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.978 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.978 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.237 00:20:14.237 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.237 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.237 02:41:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.496 02:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.496 02:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.496 02:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.496 02:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.496 02:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.496 02:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.496 { 00:20:14.496 "cntlid": 17, 00:20:14.496 "qid": 0, 00:20:14.496 "state": "enabled", 00:20:14.496 "thread": "nvmf_tgt_poll_group_000", 00:20:14.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:14.496 "listen_address": { 00:20:14.496 "trtype": "TCP", 00:20:14.496 "adrfam": "IPv4", 00:20:14.496 "traddr": "10.0.0.2", 00:20:14.496 "trsvcid": "4420" 00:20:14.496 }, 00:20:14.496 "peer_address": { 00:20:14.496 "trtype": "TCP", 00:20:14.496 "adrfam": "IPv4", 00:20:14.496 "traddr": "10.0.0.1", 00:20:14.496 "trsvcid": "42130" 00:20:14.496 }, 00:20:14.496 "auth": { 00:20:14.496 "state": "completed", 00:20:14.496 "digest": "sha256", 00:20:14.496 "dhgroup": "ffdhe3072" 00:20:14.496 } 00:20:14.496 } 00:20:14.496 ]' 00:20:14.496 02:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.496 02:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.496 02:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.754 02:41:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:14.754 02:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.754 02:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.754 02:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.754 02:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.754 02:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:20:14.754 02:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:20:15.321 02:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.321 02:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:15.321 02:41:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.321 02:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.321 02:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.321 02:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:15.321 02:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:15.321 02:41:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:15.579 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:15.580 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:15.580 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:15.580 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:15.580 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:15.580 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.580 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.580 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.580 02:41:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.580 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.580 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.580 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.580 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:15.839 00:20:15.839 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.839 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.839 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.097 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.097 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.097 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.097 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:16.097 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.097 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.097 { 00:20:16.097 "cntlid": 19, 00:20:16.097 "qid": 0, 00:20:16.097 "state": "enabled", 00:20:16.097 "thread": "nvmf_tgt_poll_group_000", 00:20:16.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:16.097 "listen_address": { 00:20:16.097 "trtype": "TCP", 00:20:16.097 "adrfam": "IPv4", 00:20:16.097 "traddr": "10.0.0.2", 00:20:16.097 "trsvcid": "4420" 00:20:16.097 }, 00:20:16.097 "peer_address": { 00:20:16.097 "trtype": "TCP", 00:20:16.097 "adrfam": "IPv4", 00:20:16.097 "traddr": "10.0.0.1", 00:20:16.097 "trsvcid": "42162" 00:20:16.097 }, 00:20:16.097 "auth": { 00:20:16.097 "state": "completed", 00:20:16.097 "digest": "sha256", 00:20:16.097 "dhgroup": "ffdhe3072" 00:20:16.097 } 00:20:16.097 } 00:20:16.097 ]' 00:20:16.097 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.097 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:16.097 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.097 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:16.097 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.356 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.356 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.356 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.356 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:20:16.356 02:41:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:20:16.924 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.924 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:16.924 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.924 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.924 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.924 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.924 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:16.924 02:41:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:17.182 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:17.182 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.182 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:17.182 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:17.182 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:17.182 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.182 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.183 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.183 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.183 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.183 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.183 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.183 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:17.442 00:20:17.442 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:17.442 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:17.442 02:41:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.701 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.701 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.701 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.701 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.701 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.701 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:17.701 { 00:20:17.701 "cntlid": 21, 00:20:17.701 "qid": 0, 00:20:17.701 "state": "enabled", 00:20:17.701 "thread": "nvmf_tgt_poll_group_000", 00:20:17.701 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:17.701 "listen_address": { 00:20:17.701 "trtype": "TCP", 00:20:17.701 "adrfam": "IPv4", 00:20:17.701 "traddr": "10.0.0.2", 00:20:17.701 
"trsvcid": "4420" 00:20:17.701 }, 00:20:17.701 "peer_address": { 00:20:17.701 "trtype": "TCP", 00:20:17.701 "adrfam": "IPv4", 00:20:17.701 "traddr": "10.0.0.1", 00:20:17.701 "trsvcid": "42190" 00:20:17.701 }, 00:20:17.701 "auth": { 00:20:17.701 "state": "completed", 00:20:17.701 "digest": "sha256", 00:20:17.701 "dhgroup": "ffdhe3072" 00:20:17.701 } 00:20:17.701 } 00:20:17.701 ]' 00:20:17.701 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:17.701 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:17.701 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:17.701 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:17.701 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.701 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.701 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.701 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.960 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:20:17.960 02:41:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:20:18.528 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.528 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:18.528 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.528 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.528 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.528 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.528 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:18.528 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:18.787 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:18.787 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.787 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:18.787 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:18.787 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:18.787 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.787 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:18.787 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.787 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.787 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.787 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:18.787 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:18.787 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:19.046 00:20:19.046 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.046 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:19.046 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.305 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.305 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.305 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.305 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.305 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.305 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.305 { 00:20:19.305 "cntlid": 23, 00:20:19.305 "qid": 0, 00:20:19.305 "state": "enabled", 00:20:19.305 "thread": "nvmf_tgt_poll_group_000", 00:20:19.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:19.305 "listen_address": { 00:20:19.305 "trtype": "TCP", 00:20:19.305 "adrfam": "IPv4", 00:20:19.305 "traddr": "10.0.0.2", 00:20:19.305 "trsvcid": "4420" 00:20:19.305 }, 00:20:19.305 "peer_address": { 00:20:19.305 "trtype": "TCP", 00:20:19.305 "adrfam": "IPv4", 00:20:19.305 "traddr": "10.0.0.1", 00:20:19.305 "trsvcid": "42228" 00:20:19.305 }, 00:20:19.305 "auth": { 00:20:19.305 "state": "completed", 00:20:19.305 "digest": "sha256", 00:20:19.305 "dhgroup": "ffdhe3072" 00:20:19.305 } 00:20:19.305 } 00:20:19.305 ]' 00:20:19.305 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.305 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:19.305 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.305 02:41:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:19.305 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.305 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.305 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.305 02:41:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.564 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:20:19.564 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:20:20.132 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.132 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:20.132 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.132 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:20.132 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.132 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.132 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.132 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:20.132 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:20.391 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:20.391 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.391 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:20.391 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:20.391 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:20.391 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.391 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.391 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.391 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:20.391 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.391 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.391 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.391 02:41:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.650 00:20:20.650 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.650 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.650 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.909 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.909 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.909 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.909 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.909 02:41:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.909 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.909 { 00:20:20.909 "cntlid": 25, 00:20:20.909 "qid": 0, 00:20:20.909 "state": "enabled", 00:20:20.909 "thread": "nvmf_tgt_poll_group_000", 00:20:20.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:20.909 "listen_address": { 00:20:20.909 "trtype": "TCP", 00:20:20.909 "adrfam": "IPv4", 00:20:20.909 "traddr": "10.0.0.2", 00:20:20.909 "trsvcid": "4420" 00:20:20.909 }, 00:20:20.909 "peer_address": { 00:20:20.909 "trtype": "TCP", 00:20:20.909 "adrfam": "IPv4", 00:20:20.909 "traddr": "10.0.0.1", 00:20:20.909 "trsvcid": "46048" 00:20:20.909 }, 00:20:20.909 "auth": { 00:20:20.909 "state": "completed", 00:20:20.909 "digest": "sha256", 00:20:20.909 "dhgroup": "ffdhe4096" 00:20:20.909 } 00:20:20.909 } 00:20:20.909 ]' 00:20:20.909 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.909 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:20.909 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.909 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:20.909 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.909 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.909 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.909 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.168 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:20:21.168 02:41:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:20:21.736 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.736 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:21.736 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.736 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.736 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.736 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:21.736 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:21.736 02:41:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:21.995 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:21.995 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.995 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:21.995 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:21.995 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:21.995 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.995 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.995 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.995 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.995 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.995 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.995 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.995 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.254 00:20:22.254 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.254 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.254 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.513 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.513 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.513 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.513 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.513 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.513 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:22.513 { 00:20:22.513 "cntlid": 27, 00:20:22.513 "qid": 0, 00:20:22.513 "state": "enabled", 00:20:22.513 "thread": "nvmf_tgt_poll_group_000", 00:20:22.513 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:22.513 "listen_address": { 00:20:22.513 "trtype": "TCP", 00:20:22.513 "adrfam": "IPv4", 00:20:22.513 "traddr": "10.0.0.2", 00:20:22.513 
"trsvcid": "4420" 00:20:22.513 }, 00:20:22.513 "peer_address": { 00:20:22.513 "trtype": "TCP", 00:20:22.513 "adrfam": "IPv4", 00:20:22.513 "traddr": "10.0.0.1", 00:20:22.513 "trsvcid": "46072" 00:20:22.513 }, 00:20:22.513 "auth": { 00:20:22.513 "state": "completed", 00:20:22.513 "digest": "sha256", 00:20:22.513 "dhgroup": "ffdhe4096" 00:20:22.513 } 00:20:22.513 } 00:20:22.513 ]' 00:20:22.513 02:41:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:22.513 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:22.513 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:22.513 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:22.513 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:22.514 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.514 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.514 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.772 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:20:22.772 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:20:23.340 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.340 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:23.340 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.340 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.340 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.340 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.340 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:23.340 02:41:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:23.600 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:23.600 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.600 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:23.600 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:23.600 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:23.600 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.600 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.600 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.600 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.600 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.600 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.600 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.600 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.858 00:20:23.858 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.858 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:20:23.858 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.117 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.117 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.117 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.117 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.117 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.117 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.117 { 00:20:24.117 "cntlid": 29, 00:20:24.117 "qid": 0, 00:20:24.117 "state": "enabled", 00:20:24.117 "thread": "nvmf_tgt_poll_group_000", 00:20:24.117 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:24.117 "listen_address": { 00:20:24.117 "trtype": "TCP", 00:20:24.117 "adrfam": "IPv4", 00:20:24.117 "traddr": "10.0.0.2", 00:20:24.117 "trsvcid": "4420" 00:20:24.117 }, 00:20:24.117 "peer_address": { 00:20:24.117 "trtype": "TCP", 00:20:24.117 "adrfam": "IPv4", 00:20:24.117 "traddr": "10.0.0.1", 00:20:24.117 "trsvcid": "46108" 00:20:24.117 }, 00:20:24.117 "auth": { 00:20:24.117 "state": "completed", 00:20:24.117 "digest": "sha256", 00:20:24.117 "dhgroup": "ffdhe4096" 00:20:24.117 } 00:20:24.117 } 00:20:24.117 ]' 00:20:24.117 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.117 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:24.117 02:41:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.117 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:24.117 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.117 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.117 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.117 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.376 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:20:24.376 02:41:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:20:24.946 02:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.946 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.946 02:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:24.946 02:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.946 02:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.946 02:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.946 02:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.946 02:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:24.946 02:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:25.205 02:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:25.205 02:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.205 02:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:25.205 02:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:25.205 02:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:25.205 02:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.205 02:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:25.205 02:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.205 02:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.205 02:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.205 02:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:25.205 02:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:25.205 02:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:25.463 00:20:25.463 02:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.463 02:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.463 02:41:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.722 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.722 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.722 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.722 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:25.722 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.722 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.722 { 00:20:25.722 "cntlid": 31, 00:20:25.722 "qid": 0, 00:20:25.722 "state": "enabled", 00:20:25.722 "thread": "nvmf_tgt_poll_group_000", 00:20:25.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:25.722 "listen_address": { 00:20:25.722 "trtype": "TCP", 00:20:25.722 "adrfam": "IPv4", 00:20:25.722 "traddr": "10.0.0.2", 00:20:25.722 "trsvcid": "4420" 00:20:25.722 }, 00:20:25.722 "peer_address": { 00:20:25.722 "trtype": "TCP", 00:20:25.722 "adrfam": "IPv4", 00:20:25.722 "traddr": "10.0.0.1", 00:20:25.722 "trsvcid": "46148" 00:20:25.722 }, 00:20:25.722 "auth": { 00:20:25.722 "state": "completed", 00:20:25.722 "digest": "sha256", 00:20:25.722 "dhgroup": "ffdhe4096" 00:20:25.722 } 00:20:25.722 } 00:20:25.722 ]' 00:20:25.722 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.722 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:25.722 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.722 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:25.722 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.722 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.722 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.722 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.981 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:20:25.981 02:41:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:20:26.549 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.549 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:26.549 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.549 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.549 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.549 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:26.549 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.549 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:26.549 02:41:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:26.808 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:26.808 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:26.808 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:26.808 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:26.808 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:26.808 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.808 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.808 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.808 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.808 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.808 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.808 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.808 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.067 00:20:27.067 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.067 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.067 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.326 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.326 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.326 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.326 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.326 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.326 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.326 { 00:20:27.326 "cntlid": 33, 00:20:27.326 "qid": 0, 00:20:27.326 "state": "enabled", 00:20:27.326 "thread": "nvmf_tgt_poll_group_000", 00:20:27.326 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:27.326 "listen_address": { 00:20:27.326 "trtype": "TCP", 00:20:27.326 "adrfam": "IPv4", 00:20:27.326 "traddr": "10.0.0.2", 00:20:27.326 
"trsvcid": "4420" 00:20:27.326 }, 00:20:27.326 "peer_address": { 00:20:27.326 "trtype": "TCP", 00:20:27.326 "adrfam": "IPv4", 00:20:27.326 "traddr": "10.0.0.1", 00:20:27.326 "trsvcid": "46172" 00:20:27.326 }, 00:20:27.326 "auth": { 00:20:27.326 "state": "completed", 00:20:27.326 "digest": "sha256", 00:20:27.326 "dhgroup": "ffdhe6144" 00:20:27.326 } 00:20:27.326 } 00:20:27.326 ]' 00:20:27.326 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.326 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:27.326 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.326 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:27.326 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.326 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.326 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.326 02:41:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.585 02:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:20:27.585 02:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:20:28.152 02:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.152 02:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:28.152 02:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.152 02:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.152 02:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.152 02:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.152 02:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:28.152 02:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:28.411 02:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:28.411 02:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.411 02:41:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:28.412 02:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:28.412 02:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:28.412 02:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.412 02:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.412 02:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.412 02:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.412 02:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.412 02:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.412 02:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.412 02:41:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.670 00:20:28.670 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.670 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.670 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.929 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.929 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.929 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.929 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.929 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.929 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.929 { 00:20:28.929 "cntlid": 35, 00:20:28.929 "qid": 0, 00:20:28.929 "state": "enabled", 00:20:28.929 "thread": "nvmf_tgt_poll_group_000", 00:20:28.930 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:28.930 "listen_address": { 00:20:28.930 "trtype": "TCP", 00:20:28.930 "adrfam": "IPv4", 00:20:28.930 "traddr": "10.0.0.2", 00:20:28.930 "trsvcid": "4420" 00:20:28.930 }, 00:20:28.930 "peer_address": { 00:20:28.930 "trtype": "TCP", 00:20:28.930 "adrfam": "IPv4", 00:20:28.930 "traddr": "10.0.0.1", 00:20:28.930 "trsvcid": "46186" 00:20:28.930 }, 00:20:28.930 "auth": { 00:20:28.930 "state": "completed", 00:20:28.930 "digest": "sha256", 00:20:28.930 "dhgroup": "ffdhe6144" 00:20:28.930 } 00:20:28.930 } 00:20:28.930 ]' 00:20:28.930 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.930 02:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:28.930 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.188 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:29.188 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.188 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.188 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.188 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.447 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:20:29.447 02:41:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:20:30.015 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.015 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:30.015 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.015 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.015 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.015 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.015 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:30.015 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:30.015 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:30.015 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.015 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:30.015 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:30.015 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:30.015 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.015 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:30.015 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.015 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.274 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.274 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.274 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.274 02:42:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.533 00:20:30.533 02:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.533 02:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.533 02:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.791 02:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.791 02:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.791 02:42:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.791 02:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.791 02:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.791 02:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.791 { 00:20:30.791 "cntlid": 37, 00:20:30.791 "qid": 0, 00:20:30.791 "state": "enabled", 00:20:30.791 "thread": "nvmf_tgt_poll_group_000", 00:20:30.791 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:30.791 "listen_address": { 00:20:30.791 "trtype": "TCP", 00:20:30.791 "adrfam": "IPv4", 00:20:30.791 "traddr": "10.0.0.2", 00:20:30.791 "trsvcid": "4420" 00:20:30.791 }, 00:20:30.791 "peer_address": { 00:20:30.791 "trtype": "TCP", 00:20:30.791 "adrfam": "IPv4", 00:20:30.791 "traddr": "10.0.0.1", 00:20:30.791 "trsvcid": "59632" 00:20:30.791 }, 00:20:30.791 "auth": { 00:20:30.791 "state": "completed", 00:20:30.791 "digest": "sha256", 00:20:30.791 "dhgroup": "ffdhe6144" 00:20:30.791 } 00:20:30.791 } 00:20:30.791 ]' 00:20:30.791 02:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.791 02:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:30.791 02:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.791 02:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:30.791 02:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.791 02:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.791 02:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.791 02:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.050 02:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:20:31.050 02:42:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:20:31.618 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.618 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:31.618 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.618 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.618 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.618 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.618 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:31.618 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:31.877 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:31.877 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.877 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:31.877 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:31.877 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:31.877 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.877 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:31.877 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.877 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.877 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.877 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:31.877 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:31.877 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:32.137 00:20:32.137 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.137 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.137 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.396 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.396 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.396 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.396 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.396 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.396 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.396 { 00:20:32.396 "cntlid": 39, 00:20:32.396 "qid": 0, 00:20:32.396 "state": "enabled", 00:20:32.396 "thread": "nvmf_tgt_poll_group_000", 00:20:32.396 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:32.396 "listen_address": { 00:20:32.396 "trtype": "TCP", 00:20:32.396 "adrfam": 
"IPv4", 00:20:32.396 "traddr": "10.0.0.2", 00:20:32.396 "trsvcid": "4420" 00:20:32.396 }, 00:20:32.396 "peer_address": { 00:20:32.396 "trtype": "TCP", 00:20:32.396 "adrfam": "IPv4", 00:20:32.396 "traddr": "10.0.0.1", 00:20:32.396 "trsvcid": "59668" 00:20:32.396 }, 00:20:32.396 "auth": { 00:20:32.396 "state": "completed", 00:20:32.396 "digest": "sha256", 00:20:32.396 "dhgroup": "ffdhe6144" 00:20:32.396 } 00:20:32.396 } 00:20:32.396 ]' 00:20:32.396 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.396 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:32.396 02:42:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.396 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:32.396 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.396 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.396 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.396 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.655 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:20:32.655 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:20:33.222 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.222 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.222 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:33.222 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.222 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.222 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.222 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:33.223 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.223 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:33.223 02:42:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:33.481 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:33.481 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.481 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:33.481 
02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:33.481 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:33.481 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.481 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.482 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.482 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.482 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.482 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.482 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.482 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.049 00:20:34.049 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.049 02:42:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.049 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.308 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.308 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.308 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.308 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.308 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.308 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.308 { 00:20:34.308 "cntlid": 41, 00:20:34.308 "qid": 0, 00:20:34.308 "state": "enabled", 00:20:34.308 "thread": "nvmf_tgt_poll_group_000", 00:20:34.308 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:34.308 "listen_address": { 00:20:34.308 "trtype": "TCP", 00:20:34.308 "adrfam": "IPv4", 00:20:34.308 "traddr": "10.0.0.2", 00:20:34.308 "trsvcid": "4420" 00:20:34.308 }, 00:20:34.308 "peer_address": { 00:20:34.308 "trtype": "TCP", 00:20:34.308 "adrfam": "IPv4", 00:20:34.308 "traddr": "10.0.0.1", 00:20:34.308 "trsvcid": "59696" 00:20:34.308 }, 00:20:34.308 "auth": { 00:20:34.308 "state": "completed", 00:20:34.308 "digest": "sha256", 00:20:34.308 "dhgroup": "ffdhe8192" 00:20:34.308 } 00:20:34.308 } 00:20:34.308 ]' 00:20:34.308 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.308 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:20:34.308 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.308 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:34.308 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.308 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.308 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.308 02:42:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.568 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:20:34.568 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:20:35.136 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.136 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:35.136 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.136 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.136 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.136 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.136 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:35.136 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:35.394 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:35.394 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.394 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:35.394 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:35.394 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:35.394 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.394 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:35.394 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.394 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.394 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.394 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.394 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.394 02:42:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.962 00:20:35.962 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.962 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.962 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.962 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.962 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.962 02:42:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.962 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.962 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.962 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.962 { 00:20:35.962 "cntlid": 43, 00:20:35.962 "qid": 0, 00:20:35.962 "state": "enabled", 00:20:35.962 "thread": "nvmf_tgt_poll_group_000", 00:20:35.962 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:35.962 "listen_address": { 00:20:35.962 "trtype": "TCP", 00:20:35.962 "adrfam": "IPv4", 00:20:35.962 "traddr": "10.0.0.2", 00:20:35.962 "trsvcid": "4420" 00:20:35.962 }, 00:20:35.962 "peer_address": { 00:20:35.962 "trtype": "TCP", 00:20:35.962 "adrfam": "IPv4", 00:20:35.962 "traddr": "10.0.0.1", 00:20:35.962 "trsvcid": "59710" 00:20:35.962 }, 00:20:35.962 "auth": { 00:20:35.962 "state": "completed", 00:20:35.962 "digest": "sha256", 00:20:35.962 "dhgroup": "ffdhe8192" 00:20:35.962 } 00:20:35.962 } 00:20:35.962 ]' 00:20:35.962 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.962 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:35.962 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.221 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:36.221 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.221 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.221 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.221 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.221 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:20:36.221 02:42:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:20:36.832 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.832 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:36.832 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.832 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.832 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.832 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.832 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:36.832 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:37.105 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:37.105 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:37.105 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:37.105 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:37.105 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:37.105 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:37.105 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.105 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.105 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.105 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.105 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.105 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.105 02:42:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:37.673 00:20:37.673 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.673 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.673 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.673 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.673 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.673 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.673 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.933 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.933 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.933 { 00:20:37.933 "cntlid": 45, 00:20:37.933 "qid": 0, 00:20:37.933 "state": "enabled", 00:20:37.933 "thread": "nvmf_tgt_poll_group_000", 00:20:37.933 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:37.933 
"listen_address": { 00:20:37.933 "trtype": "TCP", 00:20:37.933 "adrfam": "IPv4", 00:20:37.933 "traddr": "10.0.0.2", 00:20:37.933 "trsvcid": "4420" 00:20:37.933 }, 00:20:37.933 "peer_address": { 00:20:37.933 "trtype": "TCP", 00:20:37.933 "adrfam": "IPv4", 00:20:37.933 "traddr": "10.0.0.1", 00:20:37.933 "trsvcid": "59734" 00:20:37.933 }, 00:20:37.933 "auth": { 00:20:37.933 "state": "completed", 00:20:37.933 "digest": "sha256", 00:20:37.933 "dhgroup": "ffdhe8192" 00:20:37.933 } 00:20:37.933 } 00:20:37.933 ]' 00:20:37.933 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.933 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:37.933 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.933 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:37.933 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.933 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.933 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.933 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.192 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:20:38.192 02:42:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:20:38.760 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.760 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:38.760 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.760 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.760 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.760 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.760 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:38.760 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:39.018 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:39.018 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:39.018 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:20:39.018 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:39.018 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:39.018 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:39.018 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:39.018 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.018 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.018 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.018 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:39.018 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:39.018 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:39.585 00:20:39.585 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.585 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:20:39.585 02:42:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.585 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.585 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.585 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.585 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.585 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.585 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.585 { 00:20:39.585 "cntlid": 47, 00:20:39.585 "qid": 0, 00:20:39.585 "state": "enabled", 00:20:39.585 "thread": "nvmf_tgt_poll_group_000", 00:20:39.585 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:39.585 "listen_address": { 00:20:39.585 "trtype": "TCP", 00:20:39.585 "adrfam": "IPv4", 00:20:39.585 "traddr": "10.0.0.2", 00:20:39.585 "trsvcid": "4420" 00:20:39.585 }, 00:20:39.585 "peer_address": { 00:20:39.585 "trtype": "TCP", 00:20:39.585 "adrfam": "IPv4", 00:20:39.585 "traddr": "10.0.0.1", 00:20:39.585 "trsvcid": "59756" 00:20:39.585 }, 00:20:39.585 "auth": { 00:20:39.585 "state": "completed", 00:20:39.585 "digest": "sha256", 00:20:39.585 "dhgroup": "ffdhe8192" 00:20:39.585 } 00:20:39.585 } 00:20:39.585 ]' 00:20:39.585 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.585 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:39.585 02:42:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.844 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:39.844 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.844 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.844 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.844 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.844 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:20:39.844 02:42:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:20:40.411 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.412 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:40.412 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:40.412 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.670 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.671 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:40.671 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:40.671 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.671 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:40.671 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:40.671 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:40.671 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.671 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:40.671 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:40.671 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:40.671 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.671 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.671 
02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.671 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.671 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.671 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.671 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.671 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:40.929 00:20:40.929 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.929 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.929 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.187 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.188 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.188 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.188 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.188 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.188 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.188 { 00:20:41.188 "cntlid": 49, 00:20:41.188 "qid": 0, 00:20:41.188 "state": "enabled", 00:20:41.188 "thread": "nvmf_tgt_poll_group_000", 00:20:41.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:41.188 "listen_address": { 00:20:41.188 "trtype": "TCP", 00:20:41.188 "adrfam": "IPv4", 00:20:41.188 "traddr": "10.0.0.2", 00:20:41.188 "trsvcid": "4420" 00:20:41.188 }, 00:20:41.188 "peer_address": { 00:20:41.188 "trtype": "TCP", 00:20:41.188 "adrfam": "IPv4", 00:20:41.188 "traddr": "10.0.0.1", 00:20:41.188 "trsvcid": "57320" 00:20:41.188 }, 00:20:41.188 "auth": { 00:20:41.188 "state": "completed", 00:20:41.188 "digest": "sha384", 00:20:41.188 "dhgroup": "null" 00:20:41.188 } 00:20:41.188 } 00:20:41.188 ]' 00:20:41.188 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.188 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.188 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.188 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:41.188 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.447 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.447 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:20:41.447 02:42:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.447 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:20:41.447 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:20:42.015 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.015 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:42.015 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.015 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.015 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.015 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.015 02:42:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:42.015 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:42.274 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:42.274 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.274 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:42.274 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:42.274 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:42.274 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.274 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.274 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.274 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.274 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.274 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.274 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.274 02:42:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:42.533 00:20:42.533 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.533 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.533 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.799 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.799 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.799 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.799 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.799 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.799 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.799 { 00:20:42.799 "cntlid": 51, 00:20:42.799 "qid": 0, 00:20:42.799 "state": "enabled", 00:20:42.799 "thread": "nvmf_tgt_poll_group_000", 00:20:42.799 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:42.799 "listen_address": { 00:20:42.799 "trtype": "TCP", 00:20:42.799 "adrfam": "IPv4", 00:20:42.799 "traddr": "10.0.0.2", 00:20:42.799 "trsvcid": "4420" 00:20:42.799 }, 00:20:42.799 "peer_address": { 00:20:42.799 "trtype": "TCP", 00:20:42.799 "adrfam": "IPv4", 00:20:42.799 "traddr": "10.0.0.1", 00:20:42.799 "trsvcid": "57352" 00:20:42.799 }, 00:20:42.799 "auth": { 00:20:42.799 "state": "completed", 00:20:42.799 "digest": "sha384", 00:20:42.799 "dhgroup": "null" 00:20:42.799 } 00:20:42.799 } 00:20:42.799 ]' 00:20:42.799 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.799 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.799 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.799 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:42.799 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.060 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.060 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.060 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.060 02:42:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:20:43.060 02:42:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:20:43.627 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.627 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.627 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:43.627 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.627 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.627 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.627 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.627 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:43.628 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:43.887 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:43.887 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:20:43.887 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:43.887 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:43.887 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:43.887 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.887 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.887 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.887 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.887 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.887 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.887 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.887 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.146 00:20:44.146 02:42:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.146 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.146 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.404 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.404 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.404 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.404 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.404 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.404 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.404 { 00:20:44.404 "cntlid": 53, 00:20:44.404 "qid": 0, 00:20:44.404 "state": "enabled", 00:20:44.404 "thread": "nvmf_tgt_poll_group_000", 00:20:44.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:44.404 "listen_address": { 00:20:44.404 "trtype": "TCP", 00:20:44.404 "adrfam": "IPv4", 00:20:44.404 "traddr": "10.0.0.2", 00:20:44.404 "trsvcid": "4420" 00:20:44.404 }, 00:20:44.404 "peer_address": { 00:20:44.404 "trtype": "TCP", 00:20:44.404 "adrfam": "IPv4", 00:20:44.404 "traddr": "10.0.0.1", 00:20:44.404 "trsvcid": "57370" 00:20:44.404 }, 00:20:44.404 "auth": { 00:20:44.404 "state": "completed", 00:20:44.404 "digest": "sha384", 00:20:44.404 "dhgroup": "null" 00:20:44.404 } 00:20:44.404 } 00:20:44.404 ]' 00:20:44.404 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:20:44.404 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.404 02:42:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.404 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:44.404 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.663 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.663 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.663 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.663 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:20:44.663 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:20:45.230 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.230 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:45.230 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.230 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.230 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.230 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.230 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:45.230 02:42:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:45.489 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:45.490 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.490 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:45.490 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:45.490 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:45.490 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.490 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:45.490 
02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.490 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.490 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.490 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:45.490 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:45.490 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:45.749 00:20:45.749 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.749 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.749 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.008 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.008 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.008 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.008 02:42:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.008 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.008 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.008 { 00:20:46.008 "cntlid": 55, 00:20:46.008 "qid": 0, 00:20:46.008 "state": "enabled", 00:20:46.008 "thread": "nvmf_tgt_poll_group_000", 00:20:46.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:46.008 "listen_address": { 00:20:46.008 "trtype": "TCP", 00:20:46.008 "adrfam": "IPv4", 00:20:46.008 "traddr": "10.0.0.2", 00:20:46.008 "trsvcid": "4420" 00:20:46.008 }, 00:20:46.008 "peer_address": { 00:20:46.008 "trtype": "TCP", 00:20:46.008 "adrfam": "IPv4", 00:20:46.008 "traddr": "10.0.0.1", 00:20:46.008 "trsvcid": "57398" 00:20:46.008 }, 00:20:46.008 "auth": { 00:20:46.008 "state": "completed", 00:20:46.008 "digest": "sha384", 00:20:46.008 "dhgroup": "null" 00:20:46.008 } 00:20:46.008 } 00:20:46.008 ]' 00:20:46.008 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.008 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:46.008 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.008 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:46.008 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.008 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.008 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.008 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.267 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:20:46.267 02:42:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:20:46.835 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.835 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:46.835 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.835 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.835 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.835 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:46.835 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.835 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:46.835 02:42:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:47.094 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:47.094 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.094 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:47.094 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:47.094 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:47.094 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.094 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.094 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.094 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.094 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.094 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.094 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.094 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.353 00:20:47.353 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.353 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.353 02:42:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.612 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.612 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.612 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.612 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.612 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.612 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.612 { 00:20:47.612 "cntlid": 57, 00:20:47.612 "qid": 0, 00:20:47.612 "state": "enabled", 00:20:47.612 "thread": "nvmf_tgt_poll_group_000", 00:20:47.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:47.612 "listen_address": { 00:20:47.612 "trtype": "TCP", 00:20:47.612 "adrfam": "IPv4", 00:20:47.612 "traddr": "10.0.0.2", 00:20:47.612 
"trsvcid": "4420" 00:20:47.612 }, 00:20:47.612 "peer_address": { 00:20:47.612 "trtype": "TCP", 00:20:47.612 "adrfam": "IPv4", 00:20:47.612 "traddr": "10.0.0.1", 00:20:47.612 "trsvcid": "57412" 00:20:47.612 }, 00:20:47.612 "auth": { 00:20:47.612 "state": "completed", 00:20:47.612 "digest": "sha384", 00:20:47.612 "dhgroup": "ffdhe2048" 00:20:47.612 } 00:20:47.612 } 00:20:47.612 ]' 00:20:47.612 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.612 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.612 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.612 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:47.612 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.612 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.612 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.612 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.871 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:20:47.871 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:20:48.439 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.439 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:48.439 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.439 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.439 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.439 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.439 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:48.439 02:42:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:48.698 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:48.698 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.698 02:42:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:48.698 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:48.698 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:48.698 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.698 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.698 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.698 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.698 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.698 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.698 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.698 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.957 00:20:48.957 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:48.957 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:48.957 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.957 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.957 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.957 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.957 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.216 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.216 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.216 { 00:20:49.216 "cntlid": 59, 00:20:49.216 "qid": 0, 00:20:49.216 "state": "enabled", 00:20:49.216 "thread": "nvmf_tgt_poll_group_000", 00:20:49.216 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:49.216 "listen_address": { 00:20:49.216 "trtype": "TCP", 00:20:49.216 "adrfam": "IPv4", 00:20:49.216 "traddr": "10.0.0.2", 00:20:49.216 "trsvcid": "4420" 00:20:49.216 }, 00:20:49.216 "peer_address": { 00:20:49.216 "trtype": "TCP", 00:20:49.216 "adrfam": "IPv4", 00:20:49.216 "traddr": "10.0.0.1", 00:20:49.216 "trsvcid": "57434" 00:20:49.216 }, 00:20:49.216 "auth": { 00:20:49.216 "state": "completed", 00:20:49.216 "digest": "sha384", 00:20:49.216 "dhgroup": "ffdhe2048" 00:20:49.216 } 00:20:49.216 } 00:20:49.216 ]' 00:20:49.216 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.216 02:42:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.216 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.216 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:49.216 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.216 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.216 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.216 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.474 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:20:49.475 02:42:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:20:50.042 02:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.042 02:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:50.042 02:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.042 02:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.042 02:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.042 02:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.042 02:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:50.042 02:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:50.301 02:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:50.301 02:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.301 02:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:50.301 02:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:50.301 02:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:50.301 02:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.301 02:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:50.301 02:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.301 02:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.301 02:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.301 02:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.301 02:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.301 02:42:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.560 00:20:50.560 02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:50.560 02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:50.560 02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.560 02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.560 02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.560 02:42:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.560 02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.560 02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.560 02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:50.560 { 00:20:50.560 "cntlid": 61, 00:20:50.560 "qid": 0, 00:20:50.560 "state": "enabled", 00:20:50.560 "thread": "nvmf_tgt_poll_group_000", 00:20:50.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:50.560 "listen_address": { 00:20:50.560 "trtype": "TCP", 00:20:50.560 "adrfam": "IPv4", 00:20:50.560 "traddr": "10.0.0.2", 00:20:50.560 "trsvcid": "4420" 00:20:50.560 }, 00:20:50.560 "peer_address": { 00:20:50.560 "trtype": "TCP", 00:20:50.560 "adrfam": "IPv4", 00:20:50.560 "traddr": "10.0.0.1", 00:20:50.560 "trsvcid": "50582" 00:20:50.560 }, 00:20:50.560 "auth": { 00:20:50.560 "state": "completed", 00:20:50.560 "digest": "sha384", 00:20:50.560 "dhgroup": "ffdhe2048" 00:20:50.560 } 00:20:50.560 } 00:20:50.560 ]' 00:20:50.560 02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.819 02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.819 02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.819 02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:50.819 02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.819 02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.819 02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.819 02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.077 02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:20:51.077 02:42:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:20:51.644 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.644 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.644 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:51.644 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.644 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.644 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.644 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.644 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:51.644 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:51.903 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:51.903 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.903 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:51.903 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:51.903 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:51.903 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.903 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:51.903 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.903 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.903 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.903 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:51.903 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:51.903 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:52.162 00:20:52.162 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.162 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.162 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.162 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.162 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.162 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.162 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.162 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.162 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.162 { 00:20:52.162 "cntlid": 63, 00:20:52.162 "qid": 0, 00:20:52.162 "state": "enabled", 00:20:52.162 "thread": "nvmf_tgt_poll_group_000", 00:20:52.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:52.162 "listen_address": { 00:20:52.162 "trtype": "TCP", 00:20:52.162 "adrfam": 
"IPv4", 00:20:52.162 "traddr": "10.0.0.2", 00:20:52.162 "trsvcid": "4420" 00:20:52.162 }, 00:20:52.162 "peer_address": { 00:20:52.162 "trtype": "TCP", 00:20:52.162 "adrfam": "IPv4", 00:20:52.162 "traddr": "10.0.0.1", 00:20:52.162 "trsvcid": "50604" 00:20:52.162 }, 00:20:52.162 "auth": { 00:20:52.162 "state": "completed", 00:20:52.162 "digest": "sha384", 00:20:52.162 "dhgroup": "ffdhe2048" 00:20:52.162 } 00:20:52.162 } 00:20:52.162 ]' 00:20:52.162 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.420 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.420 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.420 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:52.420 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.420 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.420 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.421 02:42:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.678 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:20:52.678 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:20:53.246 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.246 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:53.246 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.246 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.246 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.246 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:53.246 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.246 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:53.246 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:53.504 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:53.504 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.504 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:53.504 
02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:53.504 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:53.504 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.504 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.504 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.504 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.504 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.504 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.504 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.504 02:42:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.763 00:20:53.763 02:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:53.763 02:42:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:53.763 02:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.763 02:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.763 02:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.763 02:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.763 02:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.763 02:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.763 02:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:53.763 { 00:20:53.763 "cntlid": 65, 00:20:53.763 "qid": 0, 00:20:53.763 "state": "enabled", 00:20:53.763 "thread": "nvmf_tgt_poll_group_000", 00:20:53.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:53.763 "listen_address": { 00:20:53.763 "trtype": "TCP", 00:20:53.763 "adrfam": "IPv4", 00:20:53.763 "traddr": "10.0.0.2", 00:20:53.763 "trsvcid": "4420" 00:20:53.763 }, 00:20:53.763 "peer_address": { 00:20:53.763 "trtype": "TCP", 00:20:53.763 "adrfam": "IPv4", 00:20:53.763 "traddr": "10.0.0.1", 00:20:53.763 "trsvcid": "50628" 00:20:53.763 }, 00:20:53.763 "auth": { 00:20:53.763 "state": "completed", 00:20:53.763 "digest": "sha384", 00:20:53.763 "dhgroup": "ffdhe3072" 00:20:53.763 } 00:20:53.763 } 00:20:53.763 ]' 00:20:53.763 02:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.022 02:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:20:54.022 02:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.022 02:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:54.022 02:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.022 02:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.022 02:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.022 02:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.280 02:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:20:54.280 02:42:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:20:54.847 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.847 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.847 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:54.847 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.847 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.847 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.847 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:54.847 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:54.847 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:55.106 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:55.106 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.106 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:55.106 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:55.106 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:55.106 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.106 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:55.106 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.106 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.106 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.106 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.106 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.106 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.106 00:20:55.364 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.364 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.364 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.364 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.365 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.365 02:42:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.365 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.365 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.365 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.365 { 00:20:55.365 "cntlid": 67, 00:20:55.365 "qid": 0, 00:20:55.365 "state": "enabled", 00:20:55.365 "thread": "nvmf_tgt_poll_group_000", 00:20:55.365 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:55.365 "listen_address": { 00:20:55.365 "trtype": "TCP", 00:20:55.365 "adrfam": "IPv4", 00:20:55.365 "traddr": "10.0.0.2", 00:20:55.365 "trsvcid": "4420" 00:20:55.365 }, 00:20:55.365 "peer_address": { 00:20:55.365 "trtype": "TCP", 00:20:55.365 "adrfam": "IPv4", 00:20:55.365 "traddr": "10.0.0.1", 00:20:55.365 "trsvcid": "50644" 00:20:55.365 }, 00:20:55.365 "auth": { 00:20:55.365 "state": "completed", 00:20:55.365 "digest": "sha384", 00:20:55.365 "dhgroup": "ffdhe3072" 00:20:55.365 } 00:20:55.365 } 00:20:55.365 ]' 00:20:55.365 02:42:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:55.623 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:55.623 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:55.623 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:55.623 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:55.623 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.623 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.623 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.882 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:20:55.882 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:20:56.449 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.449 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:56.450 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.450 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.450 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.450 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.450 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:56.450 02:42:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:56.450 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:56.450 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.450 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:56.450 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:56.450 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:56.450 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.450 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.450 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.450 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.450 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.450 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.450 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.450 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.708 00:20:56.708 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.708 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.708 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.967 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.967 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.967 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.967 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.967 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.967 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.967 { 00:20:56.967 "cntlid": 69, 00:20:56.967 "qid": 0, 00:20:56.967 "state": "enabled", 00:20:56.967 "thread": "nvmf_tgt_poll_group_000", 00:20:56.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:56.967 
"listen_address": { 00:20:56.967 "trtype": "TCP", 00:20:56.967 "adrfam": "IPv4", 00:20:56.967 "traddr": "10.0.0.2", 00:20:56.967 "trsvcid": "4420" 00:20:56.967 }, 00:20:56.967 "peer_address": { 00:20:56.967 "trtype": "TCP", 00:20:56.967 "adrfam": "IPv4", 00:20:56.967 "traddr": "10.0.0.1", 00:20:56.967 "trsvcid": "50664" 00:20:56.967 }, 00:20:56.967 "auth": { 00:20:56.967 "state": "completed", 00:20:56.967 "digest": "sha384", 00:20:56.967 "dhgroup": "ffdhe3072" 00:20:56.967 } 00:20:56.967 } 00:20:56.967 ]' 00:20:56.967 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.967 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.967 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.226 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:57.226 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.226 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.226 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.226 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.485 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:20:57.485 02:42:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:20:58.053 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.053 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:58.053 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.053 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.053 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.053 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.053 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:58.053 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:58.053 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:58.053 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.053 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:58.053 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:58.053 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:58.053 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.053 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:58.053 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.053 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.053 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.053 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:58.053 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.053 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.311 00:20:58.311 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.311 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:20:58.311 02:42:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.570 02:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.570 02:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.570 02:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.570 02:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.570 02:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.570 02:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:58.570 { 00:20:58.570 "cntlid": 71, 00:20:58.570 "qid": 0, 00:20:58.570 "state": "enabled", 00:20:58.570 "thread": "nvmf_tgt_poll_group_000", 00:20:58.570 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:58.570 "listen_address": { 00:20:58.570 "trtype": "TCP", 00:20:58.570 "adrfam": "IPv4", 00:20:58.570 "traddr": "10.0.0.2", 00:20:58.570 "trsvcid": "4420" 00:20:58.570 }, 00:20:58.570 "peer_address": { 00:20:58.570 "trtype": "TCP", 00:20:58.571 "adrfam": "IPv4", 00:20:58.571 "traddr": "10.0.0.1", 00:20:58.571 "trsvcid": "50680" 00:20:58.571 }, 00:20:58.571 "auth": { 00:20:58.571 "state": "completed", 00:20:58.571 "digest": "sha384", 00:20:58.571 "dhgroup": "ffdhe3072" 00:20:58.571 } 00:20:58.571 } 00:20:58.571 ]' 00:20:58.571 02:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:58.571 02:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:58.571 02:42:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:58.571 02:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:58.571 02:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:58.829 02:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.829 02:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.829 02:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.829 02:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:20:58.829 02:42:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:20:59.397 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.397 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.397 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:59.397 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:59.397 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.397 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.397 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:59.397 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.397 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:59.397 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:59.655 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:59.655 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:59.655 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:59.655 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:59.655 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:59.656 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.656 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.656 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:59.656 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.656 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.656 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.656 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.656 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.914 00:20:59.914 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.914 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.914 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.186 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.186 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.186 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.186 02:42:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.186 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.186 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.186 { 00:21:00.186 "cntlid": 73, 00:21:00.186 "qid": 0, 00:21:00.187 "state": "enabled", 00:21:00.187 "thread": "nvmf_tgt_poll_group_000", 00:21:00.187 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:00.187 "listen_address": { 00:21:00.187 "trtype": "TCP", 00:21:00.187 "adrfam": "IPv4", 00:21:00.187 "traddr": "10.0.0.2", 00:21:00.187 "trsvcid": "4420" 00:21:00.187 }, 00:21:00.187 "peer_address": { 00:21:00.187 "trtype": "TCP", 00:21:00.187 "adrfam": "IPv4", 00:21:00.187 "traddr": "10.0.0.1", 00:21:00.187 "trsvcid": "43314" 00:21:00.187 }, 00:21:00.187 "auth": { 00:21:00.187 "state": "completed", 00:21:00.187 "digest": "sha384", 00:21:00.187 "dhgroup": "ffdhe4096" 00:21:00.187 } 00:21:00.187 } 00:21:00.187 ]' 00:21:00.187 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.187 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.187 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.187 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:00.187 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.446 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.446 02:42:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.446 02:42:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.446 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:21:00.446 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:21:01.012 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.012 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:01.012 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.012 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.012 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.012 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.012 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:01.012 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:01.272 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:01.272 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.272 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:01.272 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:01.272 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:01.272 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.272 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.272 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.272 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.272 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.272 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.272 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.272 02:42:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.530 00:21:01.530 02:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.530 02:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.530 02:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.789 02:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.789 02:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.789 02:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.789 02:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.789 02:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.789 02:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.789 { 00:21:01.789 "cntlid": 75, 00:21:01.789 "qid": 0, 00:21:01.789 "state": "enabled", 00:21:01.789 "thread": "nvmf_tgt_poll_group_000", 00:21:01.789 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:01.789 
"listen_address": { 00:21:01.789 "trtype": "TCP", 00:21:01.789 "adrfam": "IPv4", 00:21:01.789 "traddr": "10.0.0.2", 00:21:01.789 "trsvcid": "4420" 00:21:01.789 }, 00:21:01.789 "peer_address": { 00:21:01.789 "trtype": "TCP", 00:21:01.789 "adrfam": "IPv4", 00:21:01.789 "traddr": "10.0.0.1", 00:21:01.789 "trsvcid": "43346" 00:21:01.789 }, 00:21:01.789 "auth": { 00:21:01.789 "state": "completed", 00:21:01.789 "digest": "sha384", 00:21:01.789 "dhgroup": "ffdhe4096" 00:21:01.789 } 00:21:01.789 } 00:21:01.789 ]' 00:21:01.789 02:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:01.789 02:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:01.789 02:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.789 02:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:01.789 02:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.789 02:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.789 02:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.789 02:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.048 02:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:21:02.048 02:42:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:21:02.615 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.615 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:02.615 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.615 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.615 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.615 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:02.615 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:02.615 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:02.873 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:21:02.873 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:02.873 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:21:02.873 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:02.873 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:02.873 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:02.873 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.873 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.873 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.873 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.873 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.873 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.873 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.131 00:21:03.131 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:03.131 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.131 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.390 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.390 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.390 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.390 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.390 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.390 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.390 { 00:21:03.390 "cntlid": 77, 00:21:03.390 "qid": 0, 00:21:03.390 "state": "enabled", 00:21:03.390 "thread": "nvmf_tgt_poll_group_000", 00:21:03.390 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:03.390 "listen_address": { 00:21:03.390 "trtype": "TCP", 00:21:03.390 "adrfam": "IPv4", 00:21:03.390 "traddr": "10.0.0.2", 00:21:03.390 "trsvcid": "4420" 00:21:03.390 }, 00:21:03.390 "peer_address": { 00:21:03.390 "trtype": "TCP", 00:21:03.390 "adrfam": "IPv4", 00:21:03.390 "traddr": "10.0.0.1", 00:21:03.390 "trsvcid": "43372" 00:21:03.390 }, 00:21:03.390 "auth": { 00:21:03.390 "state": "completed", 00:21:03.390 "digest": "sha384", 00:21:03.390 "dhgroup": "ffdhe4096" 00:21:03.390 } 00:21:03.390 } 00:21:03.390 ]' 00:21:03.390 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.390 02:42:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:03.390 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.390 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:03.390 02:42:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.390 02:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.390 02:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.390 02:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.648 02:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:21:03.648 02:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:21:04.215 02:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.215 02:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:04.215 02:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.215 02:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.215 02:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.215 02:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.215 02:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:04.215 02:42:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:04.473 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:21:04.473 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:04.473 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:04.473 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:04.473 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:04.473 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.473 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:04.473 02:42:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.473 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.473 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.473 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:04.473 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.473 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.732 00:21:04.732 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.732 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.732 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.990 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.990 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.990 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.990 02:42:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.990 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.990 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.990 { 00:21:04.990 "cntlid": 79, 00:21:04.990 "qid": 0, 00:21:04.990 "state": "enabled", 00:21:04.990 "thread": "nvmf_tgt_poll_group_000", 00:21:04.990 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:04.990 "listen_address": { 00:21:04.990 "trtype": "TCP", 00:21:04.990 "adrfam": "IPv4", 00:21:04.990 "traddr": "10.0.0.2", 00:21:04.990 "trsvcid": "4420" 00:21:04.990 }, 00:21:04.990 "peer_address": { 00:21:04.990 "trtype": "TCP", 00:21:04.990 "adrfam": "IPv4", 00:21:04.990 "traddr": "10.0.0.1", 00:21:04.990 "trsvcid": "43404" 00:21:04.990 }, 00:21:04.990 "auth": { 00:21:04.990 "state": "completed", 00:21:04.990 "digest": "sha384", 00:21:04.990 "dhgroup": "ffdhe4096" 00:21:04.990 } 00:21:04.990 } 00:21:04.990 ]' 00:21:04.990 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.990 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.990 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.990 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:04.991 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.991 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.991 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.991 02:42:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.249 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:21:05.249 02:42:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:21:05.816 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.816 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:05.816 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.816 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.816 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.816 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:05.816 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.816 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:21:05.816 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:06.075 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:21:06.075 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.075 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:06.075 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:06.075 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:06.075 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.075 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.075 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.075 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.075 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.075 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.075 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.075 02:42:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.333 00:21:06.592 02:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.592 02:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.592 02:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.592 02:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.592 02:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.592 02:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.592 02:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.592 02:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.592 02:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.592 { 00:21:06.592 "cntlid": 81, 00:21:06.592 "qid": 0, 00:21:06.592 "state": "enabled", 00:21:06.592 "thread": "nvmf_tgt_poll_group_000", 00:21:06.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:06.592 "listen_address": { 
00:21:06.592 "trtype": "TCP", 00:21:06.592 "adrfam": "IPv4", 00:21:06.592 "traddr": "10.0.0.2", 00:21:06.592 "trsvcid": "4420" 00:21:06.592 }, 00:21:06.592 "peer_address": { 00:21:06.592 "trtype": "TCP", 00:21:06.592 "adrfam": "IPv4", 00:21:06.592 "traddr": "10.0.0.1", 00:21:06.592 "trsvcid": "43420" 00:21:06.592 }, 00:21:06.593 "auth": { 00:21:06.593 "state": "completed", 00:21:06.593 "digest": "sha384", 00:21:06.593 "dhgroup": "ffdhe6144" 00:21:06.593 } 00:21:06.593 } 00:21:06.593 ]' 00:21:06.593 02:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.593 02:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:06.593 02:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.851 02:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:06.851 02:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.851 02:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.851 02:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.851 02:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.110 02:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:21:07.110 02:42:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:21:07.677 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.677 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.677 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:07.677 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.677 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.677 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.677 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.677 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:07.677 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:07.677 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:21:07.677 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:21:07.677 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:07.677 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:07.677 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:07.677 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.677 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.677 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.677 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.677 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.678 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.678 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.678 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.244 00:21:08.244 02:42:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.244 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:08.244 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.245 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.245 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.245 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.245 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.245 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.245 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.245 { 00:21:08.245 "cntlid": 83, 00:21:08.245 "qid": 0, 00:21:08.245 "state": "enabled", 00:21:08.245 "thread": "nvmf_tgt_poll_group_000", 00:21:08.245 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:08.245 "listen_address": { 00:21:08.245 "trtype": "TCP", 00:21:08.245 "adrfam": "IPv4", 00:21:08.245 "traddr": "10.0.0.2", 00:21:08.245 "trsvcid": "4420" 00:21:08.245 }, 00:21:08.245 "peer_address": { 00:21:08.245 "trtype": "TCP", 00:21:08.245 "adrfam": "IPv4", 00:21:08.245 "traddr": "10.0.0.1", 00:21:08.245 "trsvcid": "43446" 00:21:08.245 }, 00:21:08.245 "auth": { 00:21:08.245 "state": "completed", 00:21:08.245 "digest": "sha384", 00:21:08.245 "dhgroup": "ffdhe6144" 00:21:08.245 } 00:21:08.245 } 00:21:08.245 ]' 00:21:08.245 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:21:08.245 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:08.245 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.503 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:08.503 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.503 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.503 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.503 02:42:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.503 02:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:21:08.503 02:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:21:09.071 02:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.071 02:42:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:09.071 02:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.071 02:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.329 02:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.329 02:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.329 02:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:09.329 02:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:09.329 02:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:09.329 02:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.329 02:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:09.329 02:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:09.329 02:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:09.329 02:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.329 02:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.329 02:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.329 02:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.329 02:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.329 02:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.329 02:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.329 02:42:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.895 00:21:09.895 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.895 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.895 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.895 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.895 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.895 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.895 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.895 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.895 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.895 { 00:21:09.895 "cntlid": 85, 00:21:09.895 "qid": 0, 00:21:09.895 "state": "enabled", 00:21:09.895 "thread": "nvmf_tgt_poll_group_000", 00:21:09.895 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:09.895 "listen_address": { 00:21:09.895 "trtype": "TCP", 00:21:09.895 "adrfam": "IPv4", 00:21:09.895 "traddr": "10.0.0.2", 00:21:09.895 "trsvcid": "4420" 00:21:09.895 }, 00:21:09.895 "peer_address": { 00:21:09.895 "trtype": "TCP", 00:21:09.895 "adrfam": "IPv4", 00:21:09.895 "traddr": "10.0.0.1", 00:21:09.895 "trsvcid": "33778" 00:21:09.895 }, 00:21:09.895 "auth": { 00:21:09.895 "state": "completed", 00:21:09.895 "digest": "sha384", 00:21:09.895 "dhgroup": "ffdhe6144" 00:21:09.895 } 00:21:09.895 } 00:21:09.895 ]' 00:21:09.896 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.153 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:10.153 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.153 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:10.153 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.153 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:10.153 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.153 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.412 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:21:10.412 02:42:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:21:10.979 02:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.979 02:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:10.979 02:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.979 02:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.979 02:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.979 02:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:21:10.979 02:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:10.979 02:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:10.979 02:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:10.979 02:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:10.979 02:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:10.979 02:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:10.979 02:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:10.979 02:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.979 02:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:10.979 02:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.979 02:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.979 02:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.979 02:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:10.979 02:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:10.979 02:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:11.546 00:21:11.546 02:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.547 02:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.547 02:42:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.547 02:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.547 02:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.547 02:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.547 02:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.547 02:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.805 02:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.805 { 00:21:11.805 "cntlid": 87, 00:21:11.805 "qid": 0, 00:21:11.805 "state": "enabled", 00:21:11.805 "thread": "nvmf_tgt_poll_group_000", 00:21:11.805 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:11.805 "listen_address": { 00:21:11.805 "trtype": 
"TCP", 00:21:11.805 "adrfam": "IPv4", 00:21:11.805 "traddr": "10.0.0.2", 00:21:11.805 "trsvcid": "4420" 00:21:11.805 }, 00:21:11.805 "peer_address": { 00:21:11.805 "trtype": "TCP", 00:21:11.805 "adrfam": "IPv4", 00:21:11.805 "traddr": "10.0.0.1", 00:21:11.805 "trsvcid": "33808" 00:21:11.805 }, 00:21:11.805 "auth": { 00:21:11.805 "state": "completed", 00:21:11.805 "digest": "sha384", 00:21:11.805 "dhgroup": "ffdhe6144" 00:21:11.805 } 00:21:11.805 } 00:21:11.805 ]' 00:21:11.805 02:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.805 02:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.805 02:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.805 02:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:11.805 02:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.805 02:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.805 02:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.805 02:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.064 02:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:21:12.064 02:42:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:21:12.631 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.631 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:12.631 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.631 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.631 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.631 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:12.631 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.631 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:12.631 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:12.890 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:12.890 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.890 02:42:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:12.890 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:12.890 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:12.890 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.890 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.890 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.890 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.890 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.890 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.890 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:12.890 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:13.149 00:21:13.149 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:13.149 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.149 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.408 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.408 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.408 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.408 02:42:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.408 02:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.408 02:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.408 { 00:21:13.408 "cntlid": 89, 00:21:13.408 "qid": 0, 00:21:13.409 "state": "enabled", 00:21:13.409 "thread": "nvmf_tgt_poll_group_000", 00:21:13.409 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:13.409 "listen_address": { 00:21:13.409 "trtype": "TCP", 00:21:13.409 "adrfam": "IPv4", 00:21:13.409 "traddr": "10.0.0.2", 00:21:13.409 "trsvcid": "4420" 00:21:13.409 }, 00:21:13.409 "peer_address": { 00:21:13.409 "trtype": "TCP", 00:21:13.409 "adrfam": "IPv4", 00:21:13.409 "traddr": "10.0.0.1", 00:21:13.409 "trsvcid": "33830" 00:21:13.409 }, 00:21:13.409 "auth": { 00:21:13.409 "state": "completed", 00:21:13.409 "digest": "sha384", 00:21:13.409 "dhgroup": "ffdhe8192" 00:21:13.409 } 00:21:13.409 } 00:21:13.409 ]' 00:21:13.409 02:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.409 02:42:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:13.409 02:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.667 02:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:13.667 02:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.667 02:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.667 02:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.667 02:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.926 02:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:21:13.926 02:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:21:14.585 02:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:21:14.585 02:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:14.585 02:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.585 02:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.585 02:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.585 02:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.585 02:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:14.585 02:42:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:14.585 02:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:14.585 02:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.585 02:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:14.585 02:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:14.585 02:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:14.585 02:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.585 02:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.585 02:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.585 02:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.585 02:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.585 02:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.585 02:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.585 02:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.924 00:21:15.183 02:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:15.183 02:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:15.183 02:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.183 02:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.183 02:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.183 02:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.183 02:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.183 02:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.183 02:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:15.183 { 00:21:15.183 "cntlid": 91, 00:21:15.183 "qid": 0, 00:21:15.183 "state": "enabled", 00:21:15.183 "thread": "nvmf_tgt_poll_group_000", 00:21:15.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:15.183 "listen_address": { 00:21:15.183 "trtype": "TCP", 00:21:15.183 "adrfam": "IPv4", 00:21:15.183 "traddr": "10.0.0.2", 00:21:15.183 "trsvcid": "4420" 00:21:15.183 }, 00:21:15.183 "peer_address": { 00:21:15.183 "trtype": "TCP", 00:21:15.183 "adrfam": "IPv4", 00:21:15.183 "traddr": "10.0.0.1", 00:21:15.183 "trsvcid": "33848" 00:21:15.183 }, 00:21:15.183 "auth": { 00:21:15.183 "state": "completed", 00:21:15.183 "digest": "sha384", 00:21:15.183 "dhgroup": "ffdhe8192" 00:21:15.183 } 00:21:15.183 } 00:21:15.183 ]' 00:21:15.183 02:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:15.442 02:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:15.442 02:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.442 02:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:15.442 02:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.442 02:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:15.442 02:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.442 02:42:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.700 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:21:15.700 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:21:16.267 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.267 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:16.267 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.267 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.267 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.267 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
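The `--dhchap-secret` / `--dhchap-ctrl-secret` values passed to `nvme connect` above all follow the `DHHC-1:XX:<base64>:` representation. A small parsing sketch, with the assumption (from the NVMe DH-HMAC-CHAP secret representation, TP 8006) that `XX` identifies the hash used to transform the secret (00 = none, 01/02/03 = SHA-256/384/512 — note the log's key ids 00-03 line up with this):

```python
import base64

def parse_dhchap_secret(secret):
    """Split a 'DHHC-1:XX:<base64>:' secret into its hash identifier and
    raw key bytes. Raises AssertionError on a malformed secret and
    binascii.Error on an invalid base64 payload."""
    parts = secret.split(":")
    # Four parts because the representation ends with a trailing colon.
    assert len(parts) == 4 and parts[0] == "DHHC-1" and parts[3] == ""
    _, hash_id, b64, _ = parts
    assert hash_id in ("00", "01", "02", "03"), hash_id
    return hash_id, base64.b64decode(b64)

# Constructed example (a made-up 32-byte key, not one of the log's secrets):
demo = "DHHC-1:00:" + base64.b64encode(b"\x00" * 32).decode() + ":"
hash_id, key = parse_dhchap_secret(demo)
```

The secrets in this trace (e.g. the `DHHC-1:00:MDc0...` host key and `DHHC-1:03:NjBm...` controller key) are parsed by the kernel host and SPDK target the same way.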
00:21:16.267 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:16.267 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:16.267 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:16.267 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:16.267 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:16.267 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:16.267 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:16.267 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.267 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.267 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.267 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.267 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.267 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.267 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.267 02:42:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.834 00:21:16.834 02:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.834 02:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.834 02:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.093 02:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.093 02:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.093 02:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.093 02:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.093 02:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.093 02:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:17.093 { 00:21:17.093 "cntlid": 93, 00:21:17.093 "qid": 0, 00:21:17.093 "state": "enabled", 00:21:17.093 "thread": "nvmf_tgt_poll_group_000", 00:21:17.093 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:17.093 "listen_address": { 00:21:17.093 "trtype": "TCP", 00:21:17.093 "adrfam": "IPv4", 00:21:17.093 "traddr": "10.0.0.2", 00:21:17.093 "trsvcid": "4420" 00:21:17.093 }, 00:21:17.093 "peer_address": { 00:21:17.093 "trtype": "TCP", 00:21:17.093 "adrfam": "IPv4", 00:21:17.093 "traddr": "10.0.0.1", 00:21:17.093 "trsvcid": "33866" 00:21:17.093 }, 00:21:17.093 "auth": { 00:21:17.093 "state": "completed", 00:21:17.093 "digest": "sha384", 00:21:17.093 "dhgroup": "ffdhe8192" 00:21:17.093 } 00:21:17.093 } 00:21:17.093 ]' 00:21:17.093 02:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:17.093 02:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:17.093 02:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:17.093 02:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:17.093 02:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:17.093 02:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.093 02:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.093 02:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.352 02:42:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:21:17.352 02:42:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:21:17.919 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.919 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.919 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:17.919 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.919 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.919 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.919 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.919 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:17.919 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:18.178 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:18.178 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:21:18.178 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:18.178 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:18.178 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:18.178 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.178 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:18.178 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.178 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.178 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.179 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:18.179 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:18.179 02:42:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:18.745 00:21:18.745 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:18.745 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.745 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.004 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.004 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.004 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.004 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.004 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.004 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.004 { 00:21:19.004 "cntlid": 95, 00:21:19.004 "qid": 0, 00:21:19.004 "state": "enabled", 00:21:19.004 "thread": "nvmf_tgt_poll_group_000", 00:21:19.004 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:19.004 "listen_address": { 00:21:19.004 "trtype": "TCP", 00:21:19.004 "adrfam": "IPv4", 00:21:19.004 "traddr": "10.0.0.2", 00:21:19.004 "trsvcid": "4420" 00:21:19.004 }, 00:21:19.004 "peer_address": { 00:21:19.004 "trtype": "TCP", 00:21:19.004 "adrfam": "IPv4", 00:21:19.004 "traddr": "10.0.0.1", 00:21:19.004 "trsvcid": "33896" 00:21:19.004 }, 00:21:19.004 "auth": { 00:21:19.004 "state": "completed", 00:21:19.004 "digest": "sha384", 00:21:19.004 "dhgroup": "ffdhe8192" 00:21:19.004 } 00:21:19.004 } 00:21:19.004 ]' 00:21:19.004 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.004 02:42:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:19.004 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.005 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:19.005 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.005 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.005 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.005 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.263 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:21:19.263 02:42:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:21:19.830 02:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.830 02:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:19.830 02:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.830 02:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.830 02:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.830 02:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:19.830 02:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:19.830 02:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.830 02:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:19.830 02:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:20.089 02:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:20.089 02:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.089 02:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:20.089 02:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:20.089 02:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:20.089 02:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.089 02:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.089 02:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.089 02:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.089 02:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.089 02:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.089 02:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.089 02:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.348 00:21:20.348 02:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.348 02:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.348 02:42:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.606 02:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.606 02:42:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.606 02:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.606 02:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.606 02:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.606 02:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.606 { 00:21:20.606 "cntlid": 97, 00:21:20.606 "qid": 0, 00:21:20.606 "state": "enabled", 00:21:20.606 "thread": "nvmf_tgt_poll_group_000", 00:21:20.606 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:20.606 "listen_address": { 00:21:20.606 "trtype": "TCP", 00:21:20.606 "adrfam": "IPv4", 00:21:20.607 "traddr": "10.0.0.2", 00:21:20.607 "trsvcid": "4420" 00:21:20.607 }, 00:21:20.607 "peer_address": { 00:21:20.607 "trtype": "TCP", 00:21:20.607 "adrfam": "IPv4", 00:21:20.607 "traddr": "10.0.0.1", 00:21:20.607 "trsvcid": "34058" 00:21:20.607 }, 00:21:20.607 "auth": { 00:21:20.607 "state": "completed", 00:21:20.607 "digest": "sha512", 00:21:20.607 "dhgroup": "null" 00:21:20.607 } 00:21:20.607 } 00:21:20.607 ]' 00:21:20.607 02:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.607 02:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.607 02:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.607 02:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:20.607 02:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.607 02:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.607 02:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.607 02:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.865 02:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:21:20.865 02:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:21:21.433 02:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.433 02:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:21.433 02:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.433 02:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.433 02:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.433 02:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.433 02:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:21.433 02:42:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:21.691 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:21.691 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.691 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:21.691 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:21.691 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:21.691 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.691 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.691 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.691 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.691 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.691 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.691 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.691 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:21.950 00:21:21.950 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.950 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.950 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.950 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.950 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.950 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.950 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.209 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.209 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.209 { 00:21:22.209 "cntlid": 99, 
00:21:22.209 "qid": 0, 00:21:22.209 "state": "enabled", 00:21:22.209 "thread": "nvmf_tgt_poll_group_000", 00:21:22.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:22.209 "listen_address": { 00:21:22.209 "trtype": "TCP", 00:21:22.209 "adrfam": "IPv4", 00:21:22.209 "traddr": "10.0.0.2", 00:21:22.209 "trsvcid": "4420" 00:21:22.209 }, 00:21:22.209 "peer_address": { 00:21:22.209 "trtype": "TCP", 00:21:22.209 "adrfam": "IPv4", 00:21:22.209 "traddr": "10.0.0.1", 00:21:22.209 "trsvcid": "34092" 00:21:22.209 }, 00:21:22.209 "auth": { 00:21:22.209 "state": "completed", 00:21:22.209 "digest": "sha512", 00:21:22.209 "dhgroup": "null" 00:21:22.209 } 00:21:22.209 } 00:21:22.209 ]' 00:21:22.209 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.209 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.209 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.209 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:22.209 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.209 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.209 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.209 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.468 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret 
DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:21:22.468 02:42:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:21:23.036 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.036 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:23.036 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.036 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.036 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.036 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.036 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:23.036 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:23.036 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
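The qpair dumps in this run are what target/auth.sh validates at its @75-@77 steps: the first qpair's negotiated digest, DH group, and auth state must match what was configured via bdev_nvme_set_options. A minimal, self-contained sketch of those jq checks, run against a trimmed static copy of the JSON seen above (hypothetical standalone form, so no live SPDK target is needed):

```shell
# Reduced sample of the nvmf_subsystem_get_qpairs output from the trace
# (static copy for illustration; field values taken from the log above).
qpairs='[
  {
    "cntlid": 99,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha512",
      "dhgroup": "null"
    }
  }
]'

# Same shape of assertion as auth.sh@75-@77: extract each auth field of
# the first qpair and compare against the negotiated parameters.
digest=$(echo "$qpairs" | jq -r '.[0].auth.digest')
dhgroup=$(echo "$qpairs" | jq -r '.[0].auth.dhgroup')
state=$(echo "$qpairs" | jq -r '.[0].auth.state')

[ "$digest" = "sha512" ] && [ "$dhgroup" = "null" ] && [ "$state" = "completed" ] \
  && echo "auth negotiated: $digest/$dhgroup ($state)"
```

The test loops this check once per key/dhgroup combination; only the expected digest and dhgroup strings change between iterations.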
00:21:23.295 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.295 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:23.295 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:23.295 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:23.295 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.295 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.295 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.295 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.295 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.295 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.295 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.295 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:23.295 00:21:23.554 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.554 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.554 02:42:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.554 02:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.554 02:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.554 02:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.554 02:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.554 02:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.554 02:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.554 { 00:21:23.554 "cntlid": 101, 00:21:23.554 "qid": 0, 00:21:23.554 "state": "enabled", 00:21:23.554 "thread": "nvmf_tgt_poll_group_000", 00:21:23.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:23.554 "listen_address": { 00:21:23.554 "trtype": "TCP", 00:21:23.554 "adrfam": "IPv4", 00:21:23.554 "traddr": "10.0.0.2", 00:21:23.554 "trsvcid": "4420" 00:21:23.554 }, 00:21:23.554 "peer_address": { 00:21:23.554 "trtype": "TCP", 00:21:23.554 "adrfam": "IPv4", 00:21:23.554 "traddr": "10.0.0.1", 00:21:23.554 "trsvcid": "34100" 00:21:23.554 }, 00:21:23.554 "auth": { 00:21:23.554 "state": "completed", 00:21:23.554 "digest": "sha512", 00:21:23.554 "dhgroup": "null" 00:21:23.554 } 00:21:23.554 } 
00:21:23.554 ]' 00:21:23.554 02:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.554 02:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.554 02:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.813 02:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:23.813 02:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.813 02:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.813 02:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.813 02:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.071 02:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:21:24.071 02:42:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:21:24.638 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.638 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.638 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:24.638 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.638 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.638 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.638 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.638 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:24.638 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:24.638 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:24.638 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.638 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:24.638 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:24.638 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:24.638 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.638 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:24.638 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.638 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.638 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.638 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:24.638 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:24.638 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:24.896 00:21:24.896 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:24.896 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:24.896 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.155 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.155 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:25.155 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.155 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.155 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.155 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.155 { 00:21:25.155 "cntlid": 103, 00:21:25.155 "qid": 0, 00:21:25.155 "state": "enabled", 00:21:25.155 "thread": "nvmf_tgt_poll_group_000", 00:21:25.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:25.155 "listen_address": { 00:21:25.155 "trtype": "TCP", 00:21:25.155 "adrfam": "IPv4", 00:21:25.155 "traddr": "10.0.0.2", 00:21:25.155 "trsvcid": "4420" 00:21:25.155 }, 00:21:25.155 "peer_address": { 00:21:25.155 "trtype": "TCP", 00:21:25.155 "adrfam": "IPv4", 00:21:25.155 "traddr": "10.0.0.1", 00:21:25.155 "trsvcid": "34120" 00:21:25.155 }, 00:21:25.155 "auth": { 00:21:25.155 "state": "completed", 00:21:25.155 "digest": "sha512", 00:21:25.155 "dhgroup": "null" 00:21:25.155 } 00:21:25.155 } 00:21:25.155 ]' 00:21:25.155 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.155 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.155 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.413 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:25.413 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.413 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.413 02:42:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.413 02:42:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.413 02:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:21:25.413 02:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:21:25.980 02:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:25.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:25.980 02:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:25.980 02:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.980 02:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.980 02:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.980 02:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:25.980 02:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:25.980 02:42:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:25.980 02:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:26.238 02:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:26.238 02:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.238 02:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:26.238 02:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:26.238 02:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:26.238 02:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.238 02:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.238 02:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.238 02:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.238 02:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.238 02:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.238 02:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.238 02:42:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.496 00:21:26.496 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.496 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.496 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.754 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.754 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.754 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.754 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.754 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.754 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:26.754 { 00:21:26.754 "cntlid": 105, 00:21:26.754 "qid": 0, 00:21:26.754 "state": "enabled", 00:21:26.754 "thread": "nvmf_tgt_poll_group_000", 00:21:26.754 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:26.754 "listen_address": { 00:21:26.754 "trtype": "TCP", 00:21:26.754 "adrfam": "IPv4", 00:21:26.754 "traddr": "10.0.0.2", 00:21:26.754 "trsvcid": "4420" 00:21:26.754 }, 00:21:26.754 "peer_address": { 00:21:26.754 "trtype": "TCP", 00:21:26.754 "adrfam": "IPv4", 00:21:26.754 "traddr": "10.0.0.1", 00:21:26.754 "trsvcid": "34144" 00:21:26.754 }, 00:21:26.754 "auth": { 00:21:26.754 "state": "completed", 00:21:26.754 "digest": "sha512", 00:21:26.754 "dhgroup": "ffdhe2048" 00:21:26.754 } 00:21:26.754 } 00:21:26.754 ]' 00:21:26.754 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:26.754 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.754 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:26.754 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:26.754 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.013 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.013 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.013 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.013 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret 
DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:21:27.013 02:42:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:21:27.580 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.580 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:27.580 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.580 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.580 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.580 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:27.580 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:27.580 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:27.839 02:42:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:27.839 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.839 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:27.839 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:27.839 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:27.839 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.839 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.839 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.839 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.839 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.839 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.839 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.839 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.098 00:21:28.098 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.098 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.098 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.357 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.357 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.357 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.357 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.357 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.357 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.357 { 00:21:28.357 "cntlid": 107, 00:21:28.357 "qid": 0, 00:21:28.357 "state": "enabled", 00:21:28.357 "thread": "nvmf_tgt_poll_group_000", 00:21:28.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:28.357 "listen_address": { 00:21:28.357 "trtype": "TCP", 00:21:28.357 "adrfam": "IPv4", 00:21:28.357 "traddr": "10.0.0.2", 00:21:28.357 "trsvcid": "4420" 00:21:28.357 }, 00:21:28.357 "peer_address": { 00:21:28.357 "trtype": "TCP", 00:21:28.357 "adrfam": "IPv4", 00:21:28.357 "traddr": "10.0.0.1", 00:21:28.357 "trsvcid": "34162" 00:21:28.357 }, 00:21:28.357 "auth": { 00:21:28.357 "state": 
"completed", 00:21:28.357 "digest": "sha512", 00:21:28.357 "dhgroup": "ffdhe2048" 00:21:28.357 } 00:21:28.357 } 00:21:28.357 ]' 00:21:28.357 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.357 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:28.357 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.357 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:28.357 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.357 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.357 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.357 02:42:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.616 02:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:21:28.616 02:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:21:29.182 02:42:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.182 02:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:29.182 02:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.182 02:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.182 02:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.182 02:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.182 02:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:29.182 02:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:29.441 02:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:29.441 02:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.441 02:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:29.441 02:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:29.441 02:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:29.441 02:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.441 02:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.441 02:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.441 02:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.441 02:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.441 02:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.441 02:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.441 02:42:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.700 00:21:29.700 02:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.700 02:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.700 02:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.958 
02:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.958 02:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.958 02:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.958 02:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.958 02:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.958 02:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.958 { 00:21:29.958 "cntlid": 109, 00:21:29.958 "qid": 0, 00:21:29.958 "state": "enabled", 00:21:29.958 "thread": "nvmf_tgt_poll_group_000", 00:21:29.958 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:29.958 "listen_address": { 00:21:29.958 "trtype": "TCP", 00:21:29.958 "adrfam": "IPv4", 00:21:29.958 "traddr": "10.0.0.2", 00:21:29.958 "trsvcid": "4420" 00:21:29.958 }, 00:21:29.958 "peer_address": { 00:21:29.958 "trtype": "TCP", 00:21:29.958 "adrfam": "IPv4", 00:21:29.958 "traddr": "10.0.0.1", 00:21:29.958 "trsvcid": "46824" 00:21:29.958 }, 00:21:29.958 "auth": { 00:21:29.958 "state": "completed", 00:21:29.958 "digest": "sha512", 00:21:29.958 "dhgroup": "ffdhe2048" 00:21:29.958 } 00:21:29.958 } 00:21:29.958 ]' 00:21:29.959 02:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.959 02:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.959 02:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:29.959 02:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:29.959 02:43:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.959 02:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.959 02:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.959 02:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.217 02:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:21:30.217 02:43:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:21:30.785 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.785 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:30.785 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.785 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.785 
02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.785 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:30.785 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:30.785 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:31.044 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:31.044 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.044 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:31.044 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:31.044 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:31.044 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.044 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:31.044 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.044 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.044 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.044 02:43:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:31.044 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:31.044 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:31.303 00:21:31.303 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.303 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.303 02:43:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.562 02:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.562 02:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.562 02:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.562 02:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.562 02:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.562 02:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.562 { 00:21:31.562 "cntlid": 111, 
00:21:31.562 "qid": 0, 00:21:31.562 "state": "enabled", 00:21:31.562 "thread": "nvmf_tgt_poll_group_000", 00:21:31.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:31.562 "listen_address": { 00:21:31.562 "trtype": "TCP", 00:21:31.562 "adrfam": "IPv4", 00:21:31.562 "traddr": "10.0.0.2", 00:21:31.562 "trsvcid": "4420" 00:21:31.562 }, 00:21:31.562 "peer_address": { 00:21:31.562 "trtype": "TCP", 00:21:31.562 "adrfam": "IPv4", 00:21:31.562 "traddr": "10.0.0.1", 00:21:31.562 "trsvcid": "46852" 00:21:31.562 }, 00:21:31.562 "auth": { 00:21:31.562 "state": "completed", 00:21:31.562 "digest": "sha512", 00:21:31.562 "dhgroup": "ffdhe2048" 00:21:31.562 } 00:21:31.562 } 00:21:31.562 ]' 00:21:31.562 02:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.562 02:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.562 02:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.562 02:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:31.562 02:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.562 02:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.562 02:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.562 02:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.820 02:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:21:31.820 02:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:21:32.388 02:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.388 02:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:32.388 02:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.388 02:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.388 02:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.388 02:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:32.388 02:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.388 02:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:32.388 02:43:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:32.646 02:43:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:32.646 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:32.646 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:32.646 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:32.646 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:32.646 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:32.646 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.646 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.646 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.646 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.646 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.647 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.647 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:32.905 00:21:32.905 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:32.905 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:32.905 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.164 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.164 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.164 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.164 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.164 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.164 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.164 { 00:21:33.164 "cntlid": 113, 00:21:33.164 "qid": 0, 00:21:33.164 "state": "enabled", 00:21:33.164 "thread": "nvmf_tgt_poll_group_000", 00:21:33.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:33.164 "listen_address": { 00:21:33.164 "trtype": "TCP", 00:21:33.164 "adrfam": "IPv4", 00:21:33.164 "traddr": "10.0.0.2", 00:21:33.164 "trsvcid": "4420" 00:21:33.164 }, 00:21:33.164 "peer_address": { 00:21:33.164 "trtype": "TCP", 00:21:33.164 "adrfam": "IPv4", 00:21:33.164 "traddr": "10.0.0.1", 00:21:33.164 "trsvcid": "46894" 00:21:33.164 }, 00:21:33.164 "auth": { 00:21:33.164 "state": 
"completed", 00:21:33.164 "digest": "sha512", 00:21:33.164 "dhgroup": "ffdhe3072" 00:21:33.164 } 00:21:33.164 } 00:21:33.164 ]' 00:21:33.164 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.164 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.164 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.164 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:33.164 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.164 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.164 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.164 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.423 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:21:33.423 02:43:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret 
DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:21:33.991 02:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.991 02:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:33.991 02:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.991 02:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.991 02:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.991 02:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.991 02:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:33.991 02:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:34.250 02:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:34.250 02:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.250 02:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:34.250 02:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:34.250 02:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:21:34.250 02:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.250 02:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.250 02:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.250 02:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.250 02:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.250 02:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.250 02:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.250 02:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:34.508 00:21:34.508 02:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.508 02:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.508 02:43:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.767 02:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.767 02:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.767 02:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.767 02:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.767 02:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.767 02:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.767 { 00:21:34.767 "cntlid": 115, 00:21:34.767 "qid": 0, 00:21:34.767 "state": "enabled", 00:21:34.767 "thread": "nvmf_tgt_poll_group_000", 00:21:34.767 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:34.767 "listen_address": { 00:21:34.767 "trtype": "TCP", 00:21:34.767 "adrfam": "IPv4", 00:21:34.767 "traddr": "10.0.0.2", 00:21:34.767 "trsvcid": "4420" 00:21:34.767 }, 00:21:34.767 "peer_address": { 00:21:34.767 "trtype": "TCP", 00:21:34.767 "adrfam": "IPv4", 00:21:34.767 "traddr": "10.0.0.1", 00:21:34.767 "trsvcid": "46924" 00:21:34.767 }, 00:21:34.767 "auth": { 00:21:34.767 "state": "completed", 00:21:34.767 "digest": "sha512", 00:21:34.767 "dhgroup": "ffdhe3072" 00:21:34.767 } 00:21:34.767 } 00:21:34.767 ]' 00:21:34.767 02:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.767 02:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.767 02:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.767 02:43:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:34.767 02:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.767 02:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.767 02:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.767 02:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.025 02:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:21:35.025 02:43:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:21:35.592 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.592 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:35.592 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:35.592 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.592 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.592 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.592 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:35.592 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:35.851 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:21:35.851 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:35.851 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:35.851 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:35.851 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:35.851 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.851 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.851 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.851 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:35.851 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.851 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.851 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:35.851 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:36.109 00:21:36.109 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.109 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.109 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.109 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.109 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.109 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.109 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.109 02:43:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.110 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.110 { 00:21:36.110 "cntlid": 117, 00:21:36.110 "qid": 0, 00:21:36.110 "state": "enabled", 00:21:36.110 "thread": "nvmf_tgt_poll_group_000", 00:21:36.110 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:36.110 "listen_address": { 00:21:36.110 "trtype": "TCP", 00:21:36.110 "adrfam": "IPv4", 00:21:36.110 "traddr": "10.0.0.2", 00:21:36.110 "trsvcid": "4420" 00:21:36.110 }, 00:21:36.110 "peer_address": { 00:21:36.110 "trtype": "TCP", 00:21:36.110 "adrfam": "IPv4", 00:21:36.110 "traddr": "10.0.0.1", 00:21:36.110 "trsvcid": "46940" 00:21:36.110 }, 00:21:36.110 "auth": { 00:21:36.110 "state": "completed", 00:21:36.110 "digest": "sha512", 00:21:36.110 "dhgroup": "ffdhe3072" 00:21:36.110 } 00:21:36.110 } 00:21:36.110 ]' 00:21:36.368 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.368 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.368 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.368 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:36.368 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.368 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.368 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.368 02:43:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.627 02:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:21:36.627 02:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:21:37.194 02:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.194 02:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:37.194 02:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.194 02:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.194 02:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.194 02:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.194 02:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:37.194 02:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:37.194 02:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:37.194 02:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.194 02:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:37.194 02:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:37.194 02:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:37.194 02:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.194 02:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:37.194 02:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.194 02:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.452 02:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.453 02:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:37.453 02:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:37.453 02:43:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:37.453 00:21:37.711 02:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:37.711 02:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.711 02:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.711 02:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.711 02:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.711 02:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.711 02:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.711 02:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.711 02:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:37.711 { 00:21:37.711 "cntlid": 119, 00:21:37.711 "qid": 0, 00:21:37.711 "state": "enabled", 00:21:37.711 "thread": "nvmf_tgt_poll_group_000", 00:21:37.711 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:37.711 "listen_address": { 00:21:37.711 "trtype": "TCP", 00:21:37.711 "adrfam": "IPv4", 00:21:37.711 "traddr": "10.0.0.2", 00:21:37.711 "trsvcid": "4420" 00:21:37.711 }, 00:21:37.711 "peer_address": { 00:21:37.711 "trtype": "TCP", 00:21:37.711 "adrfam": "IPv4", 00:21:37.711 "traddr": "10.0.0.1", 
00:21:37.711 "trsvcid": "46970" 00:21:37.711 }, 00:21:37.711 "auth": { 00:21:37.711 "state": "completed", 00:21:37.711 "digest": "sha512", 00:21:37.711 "dhgroup": "ffdhe3072" 00:21:37.711 } 00:21:37.711 } 00:21:37.711 ]' 00:21:37.711 02:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:37.970 02:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.970 02:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:37.970 02:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:37.970 02:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:37.970 02:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.970 02:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.970 02:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.229 02:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:21:38.229 02:43:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:21:38.796 02:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.796 02:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:38.796 02:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.796 02:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.796 02:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.796 02:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:38.796 02:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:38.796 02:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:38.797 02:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:38.797 02:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:38.797 02:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.797 02:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:38.797 02:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:38.797 02:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:38.797 02:43:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.797 02:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.797 02:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.797 02:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.797 02:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.055 02:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.055 02:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.055 02:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:39.314 00:21:39.314 02:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.314 02:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.314 02:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.314 02:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.314 02:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.314 02:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.314 02:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.314 02:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.314 02:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.314 { 00:21:39.314 "cntlid": 121, 00:21:39.314 "qid": 0, 00:21:39.314 "state": "enabled", 00:21:39.314 "thread": "nvmf_tgt_poll_group_000", 00:21:39.314 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:39.314 "listen_address": { 00:21:39.314 "trtype": "TCP", 00:21:39.314 "adrfam": "IPv4", 00:21:39.314 "traddr": "10.0.0.2", 00:21:39.314 "trsvcid": "4420" 00:21:39.314 }, 00:21:39.314 "peer_address": { 00:21:39.314 "trtype": "TCP", 00:21:39.314 "adrfam": "IPv4", 00:21:39.314 "traddr": "10.0.0.1", 00:21:39.314 "trsvcid": "46978" 00:21:39.314 }, 00:21:39.314 "auth": { 00:21:39.314 "state": "completed", 00:21:39.314 "digest": "sha512", 00:21:39.314 "dhgroup": "ffdhe4096" 00:21:39.314 } 00:21:39.314 } 00:21:39.314 ]' 00:21:39.314 02:43:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.573 02:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.573 02:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.573 02:43:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:39.573 02:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.573 02:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.573 02:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.573 02:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.832 02:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:21:39.832 02:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:21:40.400 02:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.400 02:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:40.400 02:43:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.400 02:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.400 02:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.400 02:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.400 02:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:40.400 02:43:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:40.659 02:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:40.659 02:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.659 02:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:40.659 02:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:40.659 02:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:40.659 02:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.659 02:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.659 02:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.659 02:43:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.659 02:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.659 02:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.659 02:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.659 02:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.917 00:21:40.918 02:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:40.918 02:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:40.918 02:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.918 02:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.918 02:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.918 02:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.918 02:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:40.918 02:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.918 02:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.918 { 00:21:40.918 "cntlid": 123, 00:21:40.918 "qid": 0, 00:21:40.918 "state": "enabled", 00:21:40.918 "thread": "nvmf_tgt_poll_group_000", 00:21:40.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:40.918 "listen_address": { 00:21:40.918 "trtype": "TCP", 00:21:40.918 "adrfam": "IPv4", 00:21:40.918 "traddr": "10.0.0.2", 00:21:40.918 "trsvcid": "4420" 00:21:40.918 }, 00:21:40.918 "peer_address": { 00:21:40.918 "trtype": "TCP", 00:21:40.918 "adrfam": "IPv4", 00:21:40.918 "traddr": "10.0.0.1", 00:21:40.918 "trsvcid": "37872" 00:21:40.918 }, 00:21:40.918 "auth": { 00:21:40.918 "state": "completed", 00:21:40.918 "digest": "sha512", 00:21:40.918 "dhgroup": "ffdhe4096" 00:21:40.918 } 00:21:40.918 } 00:21:40.918 ]' 00:21:40.918 02:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.176 02:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.176 02:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.176 02:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:41.176 02:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.176 02:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.176 02:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.176 02:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.435 02:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:21:41.435 02:43:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:21:42.003 02:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.003 02:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:42.003 02:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.003 02:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.003 02:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.003 02:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.003 02:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:42.003 02:43:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:42.003 02:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:42.003 02:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.003 02:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:42.003 02:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:42.003 02:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:42.003 02:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.003 02:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.003 02:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.003 02:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.003 02:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.003 02:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.003 02:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.003 02:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.261 00:21:42.520 02:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.520 02:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.520 02:43:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.520 02:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.520 02:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.520 02:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.520 02:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.520 02:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.520 02:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.520 { 00:21:42.520 "cntlid": 125, 00:21:42.520 "qid": 0, 00:21:42.520 "state": "enabled", 00:21:42.520 "thread": "nvmf_tgt_poll_group_000", 00:21:42.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:42.520 "listen_address": { 00:21:42.520 "trtype": "TCP", 00:21:42.520 "adrfam": "IPv4", 00:21:42.520 "traddr": "10.0.0.2", 00:21:42.520 
"trsvcid": "4420" 00:21:42.520 }, 00:21:42.520 "peer_address": { 00:21:42.520 "trtype": "TCP", 00:21:42.520 "adrfam": "IPv4", 00:21:42.520 "traddr": "10.0.0.1", 00:21:42.520 "trsvcid": "37894" 00:21:42.520 }, 00:21:42.520 "auth": { 00:21:42.520 "state": "completed", 00:21:42.520 "digest": "sha512", 00:21:42.520 "dhgroup": "ffdhe4096" 00:21:42.520 } 00:21:42.520 } 00:21:42.520 ]' 00:21:42.520 02:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.520 02:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.779 02:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.779 02:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:42.779 02:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.779 02:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.779 02:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.779 02:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.037 02:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:21:43.037 02:43:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:21:43.604 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.604 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:43.604 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.604 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.604 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.604 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.604 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:43.604 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:43.604 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:43.604 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.604 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:43.604 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:43.604 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:43.604 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.604 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:43.604 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.604 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.604 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.604 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:43.604 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:43.604 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:43.863 00:21:44.121 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.121 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.121 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.121 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.121 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.121 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.121 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.121 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.121 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.121 { 00:21:44.121 "cntlid": 127, 00:21:44.121 "qid": 0, 00:21:44.121 "state": "enabled", 00:21:44.121 "thread": "nvmf_tgt_poll_group_000", 00:21:44.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:44.121 "listen_address": { 00:21:44.121 "trtype": "TCP", 00:21:44.121 "adrfam": "IPv4", 00:21:44.121 "traddr": "10.0.0.2", 00:21:44.121 "trsvcid": "4420" 00:21:44.121 }, 00:21:44.121 "peer_address": { 00:21:44.121 "trtype": "TCP", 00:21:44.121 "adrfam": "IPv4", 00:21:44.121 "traddr": "10.0.0.1", 00:21:44.121 "trsvcid": "37934" 00:21:44.121 }, 00:21:44.121 "auth": { 00:21:44.121 "state": "completed", 00:21:44.121 "digest": "sha512", 00:21:44.121 "dhgroup": "ffdhe4096" 00:21:44.121 } 00:21:44.121 } 00:21:44.121 ]' 00:21:44.121 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.121 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.121 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.380 02:43:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:44.380 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.380 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.380 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.380 02:43:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.637 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:21:44.637 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:21:45.203 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.203 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:45.203 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.203 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:45.204 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.204 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:45.204 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:45.204 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:45.204 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:45.204 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:45.204 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.204 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:45.204 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:45.204 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:45.204 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.204 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.204 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.204 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:45.204 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.204 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.204 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.204 02:43:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.771 00:21:45.771 02:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.771 02:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.771 02:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.771 02:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.771 02:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.771 02:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.771 02:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.771 02:43:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.771 02:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.771 { 00:21:45.771 "cntlid": 129, 00:21:45.771 "qid": 0, 00:21:45.771 "state": "enabled", 00:21:45.771 "thread": "nvmf_tgt_poll_group_000", 00:21:45.771 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:45.771 "listen_address": { 00:21:45.771 "trtype": "TCP", 00:21:45.771 "adrfam": "IPv4", 00:21:45.771 "traddr": "10.0.0.2", 00:21:45.771 "trsvcid": "4420" 00:21:45.771 }, 00:21:45.771 "peer_address": { 00:21:45.771 "trtype": "TCP", 00:21:45.771 "adrfam": "IPv4", 00:21:45.771 "traddr": "10.0.0.1", 00:21:45.771 "trsvcid": "37962" 00:21:45.771 }, 00:21:45.771 "auth": { 00:21:45.771 "state": "completed", 00:21:45.771 "digest": "sha512", 00:21:45.771 "dhgroup": "ffdhe6144" 00:21:45.771 } 00:21:45.771 } 00:21:45.771 ]' 00:21:45.771 02:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.771 02:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.771 02:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.030 02:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:46.030 02:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.030 02:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.030 02:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.030 02:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.030 02:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:21:46.030 02:43:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:21:46.597 02:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.597 02:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:46.597 02:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.597 02:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.856 02:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.856 02:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.856 02:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:46.856 02:43:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:46.856 02:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:46.856 02:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.856 02:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:46.856 02:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:46.856 02:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:46.856 02:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.856 02:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.856 02:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.856 02:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.856 02:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.856 02:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.856 02:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.856 02:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:47.423 00:21:47.423 02:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.423 02:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.423 02:43:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.423 02:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.423 02:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.423 02:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.423 02:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.423 02:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.423 02:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.423 { 00:21:47.423 "cntlid": 131, 00:21:47.423 "qid": 0, 00:21:47.423 "state": "enabled", 00:21:47.423 "thread": "nvmf_tgt_poll_group_000", 00:21:47.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:47.423 "listen_address": { 00:21:47.423 "trtype": "TCP", 00:21:47.423 "adrfam": "IPv4", 00:21:47.423 "traddr": "10.0.0.2", 00:21:47.423 
"trsvcid": "4420" 00:21:47.423 }, 00:21:47.423 "peer_address": { 00:21:47.423 "trtype": "TCP", 00:21:47.423 "adrfam": "IPv4", 00:21:47.423 "traddr": "10.0.0.1", 00:21:47.423 "trsvcid": "37994" 00:21:47.423 }, 00:21:47.423 "auth": { 00:21:47.423 "state": "completed", 00:21:47.423 "digest": "sha512", 00:21:47.423 "dhgroup": "ffdhe6144" 00:21:47.423 } 00:21:47.423 } 00:21:47.423 ]' 00:21:47.423 02:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.423 02:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.682 02:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.682 02:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:47.682 02:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.682 02:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.682 02:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.682 02:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.940 02:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:21:47.940 02:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:21:48.508 02:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.508 02:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:48.508 02:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.508 02:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.508 02:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.508 02:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.508 02:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:48.508 02:43:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:48.508 02:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:48.508 02:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.508 02:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:48.508 02:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:48.508 02:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:48.508 02:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.508 02:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.508 02:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.508 02:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.508 02:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.508 02:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.508 02:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.508 02:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:49.076 00:21:49.076 02:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.076 02:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:21:49.076 02:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.076 02:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.076 02:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.076 02:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.076 02:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.076 02:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.076 02:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.076 { 00:21:49.076 "cntlid": 133, 00:21:49.076 "qid": 0, 00:21:49.076 "state": "enabled", 00:21:49.076 "thread": "nvmf_tgt_poll_group_000", 00:21:49.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:49.076 "listen_address": { 00:21:49.076 "trtype": "TCP", 00:21:49.076 "adrfam": "IPv4", 00:21:49.076 "traddr": "10.0.0.2", 00:21:49.076 "trsvcid": "4420" 00:21:49.076 }, 00:21:49.076 "peer_address": { 00:21:49.076 "trtype": "TCP", 00:21:49.076 "adrfam": "IPv4", 00:21:49.076 "traddr": "10.0.0.1", 00:21:49.076 "trsvcid": "38028" 00:21:49.076 }, 00:21:49.076 "auth": { 00:21:49.076 "state": "completed", 00:21:49.076 "digest": "sha512", 00:21:49.076 "dhgroup": "ffdhe6144" 00:21:49.076 } 00:21:49.076 } 00:21:49.076 ]' 00:21:49.076 02:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.335 02:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.335 02:43:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.335 02:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:49.335 02:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.335 02:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.335 02:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.335 02:43:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.593 02:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:21:49.593 02:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:21:50.160 02:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.160 02:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:50.160 02:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.160 02:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.160 02:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.160 02:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.160 02:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:50.160 02:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:50.160 02:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:50.160 02:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.160 02:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:50.160 02:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:50.160 02:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:50.160 02:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.160 02:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:50.161 02:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.161 02:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.161 02:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.161 02:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:50.161 02:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:50.161 02:43:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:50.728 00:21:50.728 02:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.728 02:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.728 02:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.728 02:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.728 02:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.728 02:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.728 02:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:50.728 02:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.728 02:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.728 { 00:21:50.728 "cntlid": 135, 00:21:50.728 "qid": 0, 00:21:50.728 "state": "enabled", 00:21:50.728 "thread": "nvmf_tgt_poll_group_000", 00:21:50.728 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:50.728 "listen_address": { 00:21:50.728 "trtype": "TCP", 00:21:50.728 "adrfam": "IPv4", 00:21:50.728 "traddr": "10.0.0.2", 00:21:50.728 "trsvcid": "4420" 00:21:50.728 }, 00:21:50.728 "peer_address": { 00:21:50.728 "trtype": "TCP", 00:21:50.728 "adrfam": "IPv4", 00:21:50.728 "traddr": "10.0.0.1", 00:21:50.728 "trsvcid": "57162" 00:21:50.728 }, 00:21:50.728 "auth": { 00:21:50.728 "state": "completed", 00:21:50.728 "digest": "sha512", 00:21:50.728 "dhgroup": "ffdhe6144" 00:21:50.728 } 00:21:50.728 } 00:21:50.728 ]' 00:21:50.728 02:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.987 02:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.987 02:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.987 02:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:50.987 02:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.987 02:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.987 02:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.987 02:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.246 02:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:21:51.246 02:43:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:21:51.821 02:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.821 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.821 02:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:51.821 02:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.821 02:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.821 02:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.821 02:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:51.821 02:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.821 02:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:51.822 02:43:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:51.822 02:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:51.822 02:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.822 02:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:51.822 02:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:51.822 02:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:51.822 02:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.822 02:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.822 02:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.822 02:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.822 02:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.822 02:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.822 02:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.822 02:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.551 00:21:52.551 02:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.551 02:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.551 02:43:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.551 02:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.551 02:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.551 02:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.551 02:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.551 02:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.551 02:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.551 { 00:21:52.551 "cntlid": 137, 00:21:52.551 "qid": 0, 00:21:52.551 "state": "enabled", 00:21:52.551 "thread": "nvmf_tgt_poll_group_000", 00:21:52.551 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:52.551 "listen_address": { 00:21:52.551 "trtype": "TCP", 00:21:52.551 "adrfam": "IPv4", 00:21:52.551 "traddr": "10.0.0.2", 00:21:52.551 
"trsvcid": "4420" 00:21:52.551 }, 00:21:52.551 "peer_address": { 00:21:52.551 "trtype": "TCP", 00:21:52.551 "adrfam": "IPv4", 00:21:52.551 "traddr": "10.0.0.1", 00:21:52.551 "trsvcid": "57182" 00:21:52.551 }, 00:21:52.551 "auth": { 00:21:52.551 "state": "completed", 00:21:52.551 "digest": "sha512", 00:21:52.551 "dhgroup": "ffdhe8192" 00:21:52.551 } 00:21:52.551 } 00:21:52.551 ]' 00:21:52.551 02:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.810 02:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.810 02:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.810 02:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:52.810 02:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.810 02:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.810 02:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.810 02:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.069 02:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:21:53.069 02:43:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:21:53.638 02:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.638 02:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:53.638 02:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.639 02:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.639 02:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.639 02:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.639 02:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:53.639 02:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:53.639 02:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:53.639 02:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.639 02:43:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:53.639 02:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:53.639 02:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:53.639 02:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.639 02:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.639 02:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.639 02:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.639 02:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.639 02:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.639 02:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.639 02:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.208 00:21:54.208 02:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:54.208 02:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:54.208 02:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.467 02:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.467 02:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.467 02:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.467 02:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.467 02:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.467 02:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:54.467 { 00:21:54.467 "cntlid": 139, 00:21:54.467 "qid": 0, 00:21:54.467 "state": "enabled", 00:21:54.467 "thread": "nvmf_tgt_poll_group_000", 00:21:54.467 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:54.467 "listen_address": { 00:21:54.467 "trtype": "TCP", 00:21:54.467 "adrfam": "IPv4", 00:21:54.467 "traddr": "10.0.0.2", 00:21:54.467 "trsvcid": "4420" 00:21:54.467 }, 00:21:54.467 "peer_address": { 00:21:54.467 "trtype": "TCP", 00:21:54.467 "adrfam": "IPv4", 00:21:54.467 "traddr": "10.0.0.1", 00:21:54.467 "trsvcid": "57214" 00:21:54.467 }, 00:21:54.467 "auth": { 00:21:54.467 "state": "completed", 00:21:54.467 "digest": "sha512", 00:21:54.467 "dhgroup": "ffdhe8192" 00:21:54.467 } 00:21:54.467 } 00:21:54.467 ]' 00:21:54.467 02:43:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:54.467 02:43:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.467 02:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:54.467 02:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:54.467 02:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.467 02:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.467 02:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.467 02:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.725 02:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:21:54.725 02:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: --dhchap-ctrl-secret DHHC-1:02:NTYxNzAyMzE3MWNkNzEyYmZmZDViNmRjM2Q0MWVlMjZiNTBiNjYyY2FjM2E3NmRjR1Ac0Q==: 00:21:55.293 02:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.293 02:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:55.293 02:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.293 02:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.293 02:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.293 02:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:55.293 02:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:55.293 02:43:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:55.552 02:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:55.552 02:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.552 02:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:55.552 02:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:55.552 02:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:55.552 02:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.552 02:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:55.552 02:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.552 02:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.552 02:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.552 02:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.552 02:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:55.552 02:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.119 00:21:56.119 02:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.119 02:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.119 02:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.119 02:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.119 02:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.119 02:43:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.119 02:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.119 02:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.119 02:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.119 { 00:21:56.119 "cntlid": 141, 00:21:56.119 "qid": 0, 00:21:56.119 "state": "enabled", 00:21:56.119 "thread": "nvmf_tgt_poll_group_000", 00:21:56.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:56.119 "listen_address": { 00:21:56.119 "trtype": "TCP", 00:21:56.119 "adrfam": "IPv4", 00:21:56.119 "traddr": "10.0.0.2", 00:21:56.119 "trsvcid": "4420" 00:21:56.119 }, 00:21:56.119 "peer_address": { 00:21:56.119 "trtype": "TCP", 00:21:56.119 "adrfam": "IPv4", 00:21:56.119 "traddr": "10.0.0.1", 00:21:56.119 "trsvcid": "57248" 00:21:56.119 }, 00:21:56.119 "auth": { 00:21:56.119 "state": "completed", 00:21:56.119 "digest": "sha512", 00:21:56.119 "dhgroup": "ffdhe8192" 00:21:56.119 } 00:21:56.119 } 00:21:56.119 ]' 00:21:56.119 02:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.378 02:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.378 02:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.378 02:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:56.378 02:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.378 02:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.378 02:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.378 02:43:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.636 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:21:56.636 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:01:NzkwMDcwZjFlZjlkZjc1YWIwZTFiY2Y4ZTQ5ZjE0YznwHFHZ: 00:21:57.203 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.203 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:57.203 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.203 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.203 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.203 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:57.203 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:57.203 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:57.203 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:57.203 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:57.203 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:57.203 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:57.203 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:57.203 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:57.203 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:57.203 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.203 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.203 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.203 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:57.203 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:57.203 02:43:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:57.770 00:21:57.770 02:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.770 02:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.770 02:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.028 02:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.028 02:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.028 02:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.028 02:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.028 02:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.028 02:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.028 { 00:21:58.028 "cntlid": 143, 00:21:58.028 "qid": 0, 00:21:58.028 "state": "enabled", 00:21:58.028 "thread": "nvmf_tgt_poll_group_000", 00:21:58.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:58.028 "listen_address": { 00:21:58.028 "trtype": "TCP", 00:21:58.028 "adrfam": 
"IPv4", 00:21:58.028 "traddr": "10.0.0.2", 00:21:58.028 "trsvcid": "4420" 00:21:58.028 }, 00:21:58.028 "peer_address": { 00:21:58.028 "trtype": "TCP", 00:21:58.028 "adrfam": "IPv4", 00:21:58.028 "traddr": "10.0.0.1", 00:21:58.028 "trsvcid": "57268" 00:21:58.028 }, 00:21:58.028 "auth": { 00:21:58.028 "state": "completed", 00:21:58.028 "digest": "sha512", 00:21:58.028 "dhgroup": "ffdhe8192" 00:21:58.028 } 00:21:58.028 } 00:21:58.028 ]' 00:21:58.028 02:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.028 02:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:58.028 02:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.028 02:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:58.028 02:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.028 02:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.028 02:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.028 02:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.286 02:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:21:58.286 02:43:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:21:58.852 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.852 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.852 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:58.852 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.852 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.852 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.852 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:58.852 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:58.852 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:58.852 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:58.852 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:58.852 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:59.110 02:43:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:59.110 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.110 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:59.110 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:59.110 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:59.110 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.110 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.110 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.110 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.110 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.110 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.110 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.110 02:43:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.678 00:21:59.678 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:59.678 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:59.678 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.937 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.937 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.937 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.937 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.937 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.937 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:59.937 { 00:21:59.937 "cntlid": 145, 00:21:59.937 "qid": 0, 00:21:59.937 "state": "enabled", 00:21:59.937 "thread": "nvmf_tgt_poll_group_000", 00:21:59.937 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:59.937 "listen_address": { 00:21:59.937 "trtype": "TCP", 00:21:59.937 "adrfam": "IPv4", 00:21:59.937 "traddr": "10.0.0.2", 00:21:59.937 "trsvcid": "4420" 00:21:59.937 }, 00:21:59.937 "peer_address": { 00:21:59.937 "trtype": "TCP", 00:21:59.937 "adrfam": "IPv4", 00:21:59.937 "traddr": "10.0.0.1", 00:21:59.937 "trsvcid": "57298" 00:21:59.937 }, 00:21:59.937 "auth": { 00:21:59.937 "state": 
"completed", 00:21:59.937 "digest": "sha512", 00:21:59.937 "dhgroup": "ffdhe8192" 00:21:59.937 } 00:21:59.937 } 00:21:59.937 ]' 00:21:59.937 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.937 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.937 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.937 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:59.937 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:59.937 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.937 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.937 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.196 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:22:00.196 02:43:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDc0YmMwMDVkNTI3ZDk3M2MxZDk5MWNiMzkwNDRlZmM0M2U0YzU3ZGViNjVhNWNhjVxILA==: --dhchap-ctrl-secret 
DHHC-1:03:NjBmMjczNWM1OGY1MjQ1ODljODNkYzAwZDYxMmRjMTZmMmNlZmNiNzY4ZGJmNjVlYTNiMzIxNzkwZGI1ODY4NE3Pmz0=: 00:22:00.762 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.762 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:00.762 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.762 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.762 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.762 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:22:00.762 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.762 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.762 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.762 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:00.762 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:00.762 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:00.762 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:22:00.762 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:00.762 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:00.762 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:00.762 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:00.762 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:00.762 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:01.329 request: 00:22:01.329 { 00:22:01.329 "name": "nvme0", 00:22:01.329 "trtype": "tcp", 00:22:01.329 "traddr": "10.0.0.2", 00:22:01.329 "adrfam": "ipv4", 00:22:01.329 "trsvcid": "4420", 00:22:01.329 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:01.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:01.329 "prchk_reftag": false, 00:22:01.329 "prchk_guard": false, 00:22:01.329 "hdgst": false, 00:22:01.329 "ddgst": false, 00:22:01.329 "dhchap_key": "key2", 00:22:01.329 "allow_unrecognized_csi": false, 00:22:01.329 "method": "bdev_nvme_attach_controller", 00:22:01.329 "req_id": 1 00:22:01.329 } 00:22:01.329 Got JSON-RPC error response 00:22:01.329 response: 00:22:01.329 { 00:22:01.329 "code": -5, 00:22:01.329 "message": 
"Input/output error" 00:22:01.329 } 00:22:01.329 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:01.329 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:01.329 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:01.329 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:01.329 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:01.329 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.329 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.329 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.330 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.330 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.330 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.330 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.330 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:01.330 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:01.330 02:43:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:01.330 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:01.330 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:01.330 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:01.330 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:01.330 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:01.330 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:01.330 02:43:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:01.588 request: 00:22:01.588 { 00:22:01.588 "name": "nvme0", 00:22:01.588 "trtype": "tcp", 00:22:01.588 "traddr": "10.0.0.2", 00:22:01.588 "adrfam": "ipv4", 00:22:01.588 "trsvcid": "4420", 00:22:01.588 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:01.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:01.588 "prchk_reftag": false, 00:22:01.588 "prchk_guard": false, 00:22:01.588 "hdgst": 
false, 00:22:01.588 "ddgst": false, 00:22:01.588 "dhchap_key": "key1", 00:22:01.588 "dhchap_ctrlr_key": "ckey2", 00:22:01.588 "allow_unrecognized_csi": false, 00:22:01.588 "method": "bdev_nvme_attach_controller", 00:22:01.588 "req_id": 1 00:22:01.588 } 00:22:01.588 Got JSON-RPC error response 00:22:01.588 response: 00:22:01.588 { 00:22:01.588 "code": -5, 00:22:01.588 "message": "Input/output error" 00:22:01.588 } 00:22:01.588 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:01.588 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:01.588 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:01.588 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:01.588 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:01.588 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.588 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.588 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.588 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:22:01.588 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.588 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.588 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.588 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.588 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:01.588 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.588 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:01.588 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:01.588 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:01.588 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:01.589 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.589 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.589 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:02.155 request: 00:22:02.155 { 00:22:02.155 "name": "nvme0", 00:22:02.155 "trtype": 
"tcp", 00:22:02.155 "traddr": "10.0.0.2", 00:22:02.155 "adrfam": "ipv4", 00:22:02.155 "trsvcid": "4420", 00:22:02.155 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:02.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:02.155 "prchk_reftag": false, 00:22:02.155 "prchk_guard": false, 00:22:02.155 "hdgst": false, 00:22:02.155 "ddgst": false, 00:22:02.155 "dhchap_key": "key1", 00:22:02.155 "dhchap_ctrlr_key": "ckey1", 00:22:02.155 "allow_unrecognized_csi": false, 00:22:02.155 "method": "bdev_nvme_attach_controller", 00:22:02.155 "req_id": 1 00:22:02.155 } 00:22:02.155 Got JSON-RPC error response 00:22:02.155 response: 00:22:02.155 { 00:22:02.155 "code": -5, 00:22:02.155 "message": "Input/output error" 00:22:02.155 } 00:22:02.155 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:02.155 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:02.155 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:02.155 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:02.155 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:02.155 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.155 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.155 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.155 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 982851 00:22:02.155 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 982851 ']' 00:22:02.155 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 982851 00:22:02.155 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:02.155 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:02.155 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 982851 00:22:02.155 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:02.155 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:02.155 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 982851' 00:22:02.155 killing process with pid 982851 00:22:02.155 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 982851 00:22:02.155 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 982851 00:22:02.414 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:02.414 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:02.414 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:02.414 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.414 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1004408 00:22:02.414 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:02.414 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1004408 00:22:02.414 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1004408 ']' 00:22:02.414 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.414 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:02.414 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.414 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:02.414 02:43:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.672 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.672 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:02.672 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:02.672 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:02.672 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.672 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.672 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:02.672 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 1004408 00:22:02.672 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1004408 ']' 00:22:02.672 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.672 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:02.672 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.672 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:02.672 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.672 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.672 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:02.672 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:02.672 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.672 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.932 null0 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1Ri 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.mTC ]] 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.mTC 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.qRC 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.bsT ]] 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.bsT 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.69x 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.fjL ]] 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.fjL 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.bdE 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:02.932 02:43:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:03.868 nvme0n1 00:22:03.868 02:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:03.868 02:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:03.868 02:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.868 02:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.868 02:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.868 02:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.868 02:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.868 02:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.868 02:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.868 { 00:22:03.868 "cntlid": 1, 00:22:03.868 "qid": 0, 00:22:03.868 "state": "enabled", 00:22:03.868 "thread": "nvmf_tgt_poll_group_000", 00:22:03.868 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:03.868 "listen_address": { 00:22:03.868 "trtype": "TCP", 00:22:03.868 "adrfam": "IPv4", 00:22:03.868 "traddr": "10.0.0.2", 00:22:03.868 "trsvcid": "4420" 00:22:03.868 }, 00:22:03.868 "peer_address": { 00:22:03.868 "trtype": "TCP", 00:22:03.868 "adrfam": "IPv4", 00:22:03.868 "traddr": 
"10.0.0.1", 00:22:03.868 "trsvcid": "36214" 00:22:03.869 }, 00:22:03.869 "auth": { 00:22:03.869 "state": "completed", 00:22:03.869 "digest": "sha512", 00:22:03.869 "dhgroup": "ffdhe8192" 00:22:03.869 } 00:22:03.869 } 00:22:03.869 ]' 00:22:03.869 02:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.129 02:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.129 02:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.129 02:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:04.129 02:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:04.129 02:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.129 02:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.129 02:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.388 02:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:22:04.388 02:43:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:22:04.958 02:43:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.958 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.958 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:04.958 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.958 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.958 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.958 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:22:04.958 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.958 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.958 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.958 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:04.958 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:05.216 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:05.216 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:05.216 02:43:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:05.216 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:05.216 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.216 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:05.216 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.216 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:05.216 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:05.216 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:05.216 request: 00:22:05.216 { 00:22:05.216 "name": "nvme0", 00:22:05.216 "trtype": "tcp", 00:22:05.216 "traddr": "10.0.0.2", 00:22:05.216 "adrfam": "ipv4", 00:22:05.216 "trsvcid": "4420", 00:22:05.216 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:05.216 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:05.216 "prchk_reftag": false, 00:22:05.216 "prchk_guard": false, 00:22:05.216 "hdgst": false, 00:22:05.216 "ddgst": false, 00:22:05.216 "dhchap_key": "key3", 00:22:05.216 
"allow_unrecognized_csi": false, 00:22:05.216 "method": "bdev_nvme_attach_controller", 00:22:05.216 "req_id": 1 00:22:05.216 } 00:22:05.216 Got JSON-RPC error response 00:22:05.216 response: 00:22:05.216 { 00:22:05.216 "code": -5, 00:22:05.216 "message": "Input/output error" 00:22:05.216 } 00:22:05.216 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:05.216 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:05.216 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:05.216 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:05.216 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:05.216 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:05.216 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:05.216 02:43:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:05.475 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:05.475 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:05.475 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:05.475 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:05.475 02:43:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.475 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:05.475 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.475 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:05.475 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:05.475 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:05.733 request: 00:22:05.733 { 00:22:05.733 "name": "nvme0", 00:22:05.733 "trtype": "tcp", 00:22:05.733 "traddr": "10.0.0.2", 00:22:05.733 "adrfam": "ipv4", 00:22:05.733 "trsvcid": "4420", 00:22:05.733 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:05.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:05.733 "prchk_reftag": false, 00:22:05.733 "prchk_guard": false, 00:22:05.733 "hdgst": false, 00:22:05.733 "ddgst": false, 00:22:05.733 "dhchap_key": "key3", 00:22:05.733 "allow_unrecognized_csi": false, 00:22:05.733 "method": "bdev_nvme_attach_controller", 00:22:05.733 "req_id": 1 00:22:05.733 } 00:22:05.733 Got JSON-RPC error response 00:22:05.733 response: 00:22:05.733 { 00:22:05.733 "code": -5, 00:22:05.733 "message": "Input/output error" 00:22:05.733 } 00:22:05.733 
02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:05.734 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:05.734 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:05.734 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:05.734 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:05.734 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:22:05.734 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:22:05.734 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:05.734 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:05.734 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:05.992 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:05.992 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.993 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.993 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.993 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:05.993 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.993 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.993 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.993 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:05.993 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:05.993 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:05.993 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:05.993 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.993 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:05.993 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:05.993 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:05.993 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:05.993 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:06.251 request: 00:22:06.251 { 00:22:06.251 "name": "nvme0", 00:22:06.251 "trtype": "tcp", 00:22:06.251 "traddr": "10.0.0.2", 00:22:06.252 "adrfam": "ipv4", 00:22:06.252 "trsvcid": "4420", 00:22:06.252 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:06.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:06.252 "prchk_reftag": false, 00:22:06.252 "prchk_guard": false, 00:22:06.252 "hdgst": false, 00:22:06.252 "ddgst": false, 00:22:06.252 "dhchap_key": "key0", 00:22:06.252 "dhchap_ctrlr_key": "key1", 00:22:06.252 "allow_unrecognized_csi": false, 00:22:06.252 "method": "bdev_nvme_attach_controller", 00:22:06.252 "req_id": 1 00:22:06.252 } 00:22:06.252 Got JSON-RPC error response 00:22:06.252 response: 00:22:06.252 { 00:22:06.252 "code": -5, 00:22:06.252 "message": "Input/output error" 00:22:06.252 } 00:22:06.252 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:06.252 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:06.252 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:06.252 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:06.252 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:22:06.252 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:06.252 02:43:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:22:06.510 nvme0n1 00:22:06.510 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:22:06.510 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:22:06.510 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.769 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.769 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.769 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.028 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:22:07.028 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.028 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:22:07.028 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.028 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:07.028 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:07.028 02:43:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:07.596 nvme0n1 00:22:07.596 02:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:07.596 02:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:07.596 02:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.854 02:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.854 02:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:07.854 02:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.854 02:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.854 
02:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.854 02:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:07.854 02:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:07.854 02:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.113 02:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.113 02:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:22:08.113 02:43:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: --dhchap-ctrl-secret DHHC-1:03:M2NkNGZhYjBhM2M2NzU1ODNmNmE3Y2Q2ZTk0YTQ0MjE3NTY4NGM3ZjJmNzkzMWJkN2RhNzcwYWVhY2Y4N2E1NObY3nY=: 00:22:08.681 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:08.681 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:08.681 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:08.681 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:08.681 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:08.681 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:08.681 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:08.681 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.681 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.940 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:22:08.940 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:08.940 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:08.940 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:08.940 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:08.940 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:08.940 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:08.940 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:08.940 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:08.940 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:09.199 request: 00:22:09.199 { 00:22:09.199 "name": "nvme0", 00:22:09.199 "trtype": "tcp", 00:22:09.199 "traddr": "10.0.0.2", 00:22:09.199 "adrfam": "ipv4", 00:22:09.199 "trsvcid": "4420", 00:22:09.199 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:09.199 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:09.199 "prchk_reftag": false, 00:22:09.199 "prchk_guard": false, 00:22:09.199 "hdgst": false, 00:22:09.199 "ddgst": false, 00:22:09.199 "dhchap_key": "key1", 00:22:09.199 "allow_unrecognized_csi": false, 00:22:09.199 "method": "bdev_nvme_attach_controller", 00:22:09.199 "req_id": 1 00:22:09.199 } 00:22:09.199 Got JSON-RPC error response 00:22:09.199 response: 00:22:09.199 { 00:22:09.199 "code": -5, 00:22:09.199 "message": "Input/output error" 00:22:09.199 } 00:22:09.199 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:09.199 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:09.199 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:09.199 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:09.199 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:09.199 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:09.199 02:43:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:10.135 nvme0n1 00:22:10.135 02:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:10.135 02:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:10.135 02:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.135 02:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.135 02:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.135 02:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.393 02:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:10.393 02:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.393 02:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:10.393 02:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.393 02:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:10.393 02:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:10.393 02:43:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:10.652 nvme0n1 00:22:10.652 02:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:10.652 02:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:10.652 02:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.911 02:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.911 02:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.911 02:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.170 02:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:11.170 02:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.170 02:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.170 02:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.170 02:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: '' 2s 00:22:11.170 02:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:11.170 02:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:11.170 02:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: 00:22:11.170 02:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:11.170 02:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:11.170 02:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:11.170 02:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: ]] 00:22:11.170 02:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:YTY1NzFhNWFjNjNjNDgwOTk0MmQ4NjRlZDcyM2Q0ZjRvDGkZ: 00:22:11.170 02:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:11.170 02:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:11.170 02:43:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:13.073 
02:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:13.073 02:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:13.073 02:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:13.073 02:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:13.073 02:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:13.073 02:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:13.073 02:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:13.073 02:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:13.073 02:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.073 02:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.073 02:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.073 02:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: 2s 00:22:13.073 02:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:13.073 02:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:13.073 02:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:13.073 02:43:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: 00:22:13.073 02:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:13.073 02:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:13.073 02:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:13.073 02:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: ]] 00:22:13.073 02:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NmFmYTNiZmMwNDJhN2JmYTYxMTdlNzU1YmY3NGI4MzkyMmM0NGQyNzM5OWU3Mzc0GXLjow==: 00:22:13.073 02:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:13.073 02:43:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:15.606 02:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:15.606 02:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:15.606 02:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:15.606 02:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:15.606 02:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:15.606 02:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:15.606 02:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:15.606 02:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:15.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:15.606 02:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:15.606 02:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.606 02:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.606 02:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.606 02:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:15.606 02:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:15.606 02:43:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:15.865 nvme0n1 00:22:15.865 02:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:22:15.865 02:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.865 02:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.865 02:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.865 02:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:15.865 02:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:16.432 02:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:16.433 02:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:16.433 02:43:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.692 02:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.692 02:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:16.692 02:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.692 02:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.692 02:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.692 02:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:16.692 02:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:16.951 02:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:16.951 02:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:16.951 02:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:16.951 02:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.951 02:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:16.951 02:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.951 02:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.951 02:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.951 02:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:16.951 02:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:16.951 02:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:16.951 02:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:16.951 02:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:16.951 02:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:16.951 02:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:16.951 02:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:16.951 02:43:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:17.518 request: 00:22:17.518 { 00:22:17.518 "name": "nvme0", 00:22:17.518 "dhchap_key": "key1", 00:22:17.518 "dhchap_ctrlr_key": "key3", 00:22:17.518 "method": "bdev_nvme_set_keys", 00:22:17.518 "req_id": 1 00:22:17.518 } 00:22:17.518 Got JSON-RPC error response 00:22:17.518 response: 00:22:17.518 { 00:22:17.518 "code": -13, 00:22:17.518 "message": "Permission denied" 00:22:17.518 } 00:22:17.518 02:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:17.518 02:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:17.518 02:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:17.518 02:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:17.518 02:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:17.518 02:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:17.518 02:43:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.776 02:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:22:17.776 02:43:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:18.712 02:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:18.712 02:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:18.713 02:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.971 02:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:18.971 02:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:18.971 02:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.971 02:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.971 02:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.971 02:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:18.971 02:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:18.971 02:43:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:19.539 nvme0n1 00:22:19.798 02:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:19.798 02:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.798 02:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.798 02:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.798 02:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:19.798 02:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:19.798 02:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:19.798 02:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:19.798 02:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:19.798 02:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:19.798 02:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:19.798 02:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:19.798 02:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:20.056 request: 00:22:20.056 { 00:22:20.056 "name": "nvme0", 00:22:20.056 "dhchap_key": "key2", 00:22:20.056 "dhchap_ctrlr_key": "key0", 00:22:20.056 "method": "bdev_nvme_set_keys", 00:22:20.056 "req_id": 1 00:22:20.056 } 00:22:20.056 Got JSON-RPC error response 00:22:20.056 response: 00:22:20.057 { 00:22:20.057 "code": -13, 00:22:20.057 "message": "Permission denied" 00:22:20.057 } 00:22:20.057 02:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:20.057 02:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:20.057 02:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:20.057 02:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:20.057 02:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:20.057 02:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:20.057 02:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:20.315 02:43:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:20.316 02:43:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:21.692 02:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:21.692 02:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:21.692 02:43:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.692 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:21.692 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:21.692 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:21.692 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 982960 00:22:21.692 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 982960 ']' 00:22:21.692 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 982960 00:22:21.692 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:21.692 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:21.692 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 982960 00:22:21.692 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:21.692 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:21.692 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 982960' 00:22:21.692 killing process with pid 982960 00:22:21.692 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 982960 00:22:21.692 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 982960 00:22:21.951 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:21.951 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:21.951 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:21.951 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:21.951 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:21.951 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:21.951 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:21.951 rmmod nvme_tcp 00:22:21.951 rmmod nvme_fabrics 00:22:21.951 rmmod nvme_keyring 00:22:21.951 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:21.951 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:21.951 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:21.951 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1004408 ']' 00:22:21.951 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1004408 00:22:21.951 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1004408 ']' 00:22:21.951 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1004408 00:22:21.951 
02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:21.951 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:21.951 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1004408 00:22:21.951 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:21.951 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:21.951 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1004408' 00:22:21.951 killing process with pid 1004408 00:22:21.951 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1004408 00:22:21.951 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1004408 00:22:22.210 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:22.210 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:22.210 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:22.210 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:22.211 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:22.211 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:22.211 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:22.211 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:22.211 02:43:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:22.211 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:22.211 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:22.211 02:43:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.755 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:24.755 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.1Ri /tmp/spdk.key-sha256.qRC /tmp/spdk.key-sha384.69x /tmp/spdk.key-sha512.bdE /tmp/spdk.key-sha512.mTC /tmp/spdk.key-sha384.bsT /tmp/spdk.key-sha256.fjL '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:24.755 00:22:24.755 real 2m31.660s 00:22:24.755 user 5m49.890s 00:22:24.755 sys 0m24.088s 00:22:24.755 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:24.755 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.755 ************************************ 00:22:24.755 END TEST nvmf_auth_target 00:22:24.755 ************************************ 00:22:24.755 02:43:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:24.756 02:43:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:24.756 02:43:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:24.756 02:43:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:22:24.756 02:43:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:24.756 ************************************ 00:22:24.756 START TEST nvmf_bdevio_no_huge 00:22:24.756 ************************************ 00:22:24.756 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:24.756 * Looking for test storage... 00:22:24.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:24.756 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:24.756 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:22:24.756 02:43:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:24.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.756 --rc genhtml_branch_coverage=1 00:22:24.756 --rc genhtml_function_coverage=1 00:22:24.756 --rc genhtml_legend=1 00:22:24.756 --rc geninfo_all_blocks=1 00:22:24.756 --rc geninfo_unexecuted_blocks=1 00:22:24.756 00:22:24.756 ' 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:24.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.756 --rc genhtml_branch_coverage=1 00:22:24.756 --rc genhtml_function_coverage=1 00:22:24.756 --rc genhtml_legend=1 00:22:24.756 --rc geninfo_all_blocks=1 00:22:24.756 --rc geninfo_unexecuted_blocks=1 00:22:24.756 00:22:24.756 ' 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:24.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.756 --rc genhtml_branch_coverage=1 00:22:24.756 --rc genhtml_function_coverage=1 00:22:24.756 --rc genhtml_legend=1 00:22:24.756 --rc geninfo_all_blocks=1 00:22:24.756 --rc geninfo_unexecuted_blocks=1 00:22:24.756 00:22:24.756 ' 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:24.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.756 --rc genhtml_branch_coverage=1 
00:22:24.756 --rc genhtml_function_coverage=1 00:22:24.756 --rc genhtml_legend=1 00:22:24.756 --rc geninfo_all_blocks=1 00:22:24.756 --rc geninfo_unexecuted_blocks=1 00:22:24.756 00:22:24.756 ' 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:24.756 02:43:55 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:24.756 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.757 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.757 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.757 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:24.757 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.757 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:24.757 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:24.757 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:24.757 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:24.757 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:24.757 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:24.757 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:24.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:24.757 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:24.757 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:24.757 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:24.757 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:22:24.757 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:24.757 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:24.757 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:24.757 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:24.757 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:24.757 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:24.757 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:24.757 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.757 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:24.757 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:24.757 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:24.757 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:24.757 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:24.757 02:43:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 
0x159b)' 00:22:30.068 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:30.068 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:30.068 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:30.069 Found net devices under 0000:af:00.0: cvl_0_0 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:30.069 
02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:30.069 Found net devices under 0000:af:00.1: cvl_0_1 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:30.069 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:30.327 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:30.327 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:30.327 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:30.327 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:30.327 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:30.327 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:30.327 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:30.327 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:22:30.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:30.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:22:30.327 00:22:30.327 --- 10.0.0.2 ping statistics --- 00:22:30.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.327 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:22:30.327 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:30.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:30.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:22:30.327 00:22:30.327 --- 10.0.0.1 ping statistics --- 00:22:30.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:30.327 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:22:30.328 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:30.328 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:30.328 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:30.328 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:30.328 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:30.328 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:30.328 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:30.328 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:30.328 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:30.328 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:22:30.328 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:30.328 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:30.328 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:30.586 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1011162 00:22:30.586 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:30.586 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1011162 00:22:30.586 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1011162 ']' 00:22:30.586 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:30.586 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:30.586 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:30.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:30.586 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:30.586 02:44:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:30.586 [2024-12-16 02:44:01.040141] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:22:30.586 [2024-12-16 02:44:01.040185] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:30.586 [2024-12-16 02:44:01.124582] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:30.586 [2024-12-16 02:44:01.160467] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:30.586 [2024-12-16 02:44:01.160502] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:30.586 [2024-12-16 02:44:01.160509] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:30.586 [2024-12-16 02:44:01.160515] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:30.586 [2024-12-16 02:44:01.160520] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:30.586 [2024-12-16 02:44:01.161565] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:22:30.586 [2024-12-16 02:44:01.161677] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:22:30.586 [2024-12-16 02:44:01.161798] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:30.586 [2024-12-16 02:44:01.161800] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:30.844 [2024-12-16 02:44:01.302170] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:30.844 02:44:01 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:30.844 Malloc0 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:30.844 [2024-12-16 02:44:01.346448] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.844 02:44:01 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:30.844 { 00:22:30.844 "params": { 00:22:30.844 "name": "Nvme$subsystem", 00:22:30.844 "trtype": "$TEST_TRANSPORT", 00:22:30.844 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:30.844 "adrfam": "ipv4", 00:22:30.844 "trsvcid": "$NVMF_PORT", 00:22:30.844 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:30.844 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:30.844 "hdgst": ${hdgst:-false}, 00:22:30.844 "ddgst": ${ddgst:-false} 00:22:30.844 }, 00:22:30.844 "method": "bdev_nvme_attach_controller" 00:22:30.844 } 00:22:30.844 EOF 00:22:30.844 )") 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:30.844 02:44:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:30.844 "params": { 00:22:30.844 "name": "Nvme1", 00:22:30.844 "trtype": "tcp", 00:22:30.844 "traddr": "10.0.0.2", 00:22:30.844 "adrfam": "ipv4", 00:22:30.844 "trsvcid": "4420", 00:22:30.844 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.844 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:30.844 "hdgst": false, 00:22:30.844 "ddgst": false 00:22:30.844 }, 00:22:30.844 "method": "bdev_nvme_attach_controller" 00:22:30.844 }' 00:22:30.844 [2024-12-16 02:44:01.395821] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:22:30.844 [2024-12-16 02:44:01.395872] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1011369 ] 00:22:30.844 [2024-12-16 02:44:01.473970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:31.102 [2024-12-16 02:44:01.511404] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:31.102 [2024-12-16 02:44:01.511508] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.102 [2024-12-16 02:44:01.511509] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:31.102 I/O targets: 00:22:31.102 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:31.102 00:22:31.102 00:22:31.102 CUnit - A unit testing framework for C - Version 2.1-3 00:22:31.102 http://cunit.sourceforge.net/ 00:22:31.102 00:22:31.102 00:22:31.102 Suite: bdevio tests on: Nvme1n1 00:22:31.102 Test: blockdev write read block ...passed 00:22:31.360 Test: blockdev write zeroes read block ...passed 00:22:31.360 Test: blockdev write zeroes read no split ...passed 00:22:31.360 Test: blockdev write zeroes 
read split ...passed 00:22:31.360 Test: blockdev write zeroes read split partial ...passed 00:22:31.360 Test: blockdev reset ...[2024-12-16 02:44:01.841645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:31.360 [2024-12-16 02:44:01.841703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2423d00 (9): Bad file descriptor 00:22:31.360 [2024-12-16 02:44:01.854319] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:22:31.360 passed 00:22:31.360 Test: blockdev write read 8 blocks ...passed 00:22:31.360 Test: blockdev write read size > 128k ...passed 00:22:31.360 Test: blockdev write read invalid size ...passed 00:22:31.360 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:31.360 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:31.360 Test: blockdev write read max offset ...passed 00:22:31.619 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:31.619 Test: blockdev writev readv 8 blocks ...passed 00:22:31.620 Test: blockdev writev readv 30 x 1block ...passed 00:22:31.620 Test: blockdev writev readv block ...passed 00:22:31.620 Test: blockdev writev readv size > 128k ...passed 00:22:31.620 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:31.620 Test: blockdev comparev and writev ...[2024-12-16 02:44:02.067610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:31.620 [2024-12-16 02:44:02.067645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.620 [2024-12-16 02:44:02.067660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:31.620 [2024-12-16 
02:44:02.067668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:31.620 [2024-12-16 02:44:02.067907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:31.620 [2024-12-16 02:44:02.067919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:31.620 [2024-12-16 02:44:02.067931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:31.620 [2024-12-16 02:44:02.067938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:31.620 [2024-12-16 02:44:02.068194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:31.620 [2024-12-16 02:44:02.068205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:31.620 [2024-12-16 02:44:02.068216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:31.620 [2024-12-16 02:44:02.068224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:31.620 [2024-12-16 02:44:02.068458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:31.620 [2024-12-16 02:44:02.068469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:31.620 [2024-12-16 02:44:02.068480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:22:31.620 [2024-12-16 02:44:02.068488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:31.620 passed 00:22:31.620 Test: blockdev nvme passthru rw ...passed 00:22:31.620 Test: blockdev nvme passthru vendor specific ...[2024-12-16 02:44:02.150200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:31.620 [2024-12-16 02:44:02.150217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:31.620 [2024-12-16 02:44:02.150319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:31.620 [2024-12-16 02:44:02.150329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:31.620 [2024-12-16 02:44:02.150426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:31.620 [2024-12-16 02:44:02.150436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:31.620 [2024-12-16 02:44:02.150536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:31.620 [2024-12-16 02:44:02.150547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:31.620 passed 00:22:31.620 Test: blockdev nvme admin passthru ...passed 00:22:31.620 Test: blockdev copy ...passed 00:22:31.620 00:22:31.620 Run Summary: Type Total Ran Passed Failed Inactive 00:22:31.620 suites 1 1 n/a 0 0 00:22:31.620 tests 23 23 23 0 0 00:22:31.620 asserts 152 152 152 0 n/a 00:22:31.620 00:22:31.620 Elapsed time = 1.069 seconds 
00:22:31.878 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:31.878 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.878 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:31.879 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.879 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:31.879 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:31.879 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:31.879 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:31.879 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:31.879 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:31.879 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:31.879 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:31.879 rmmod nvme_tcp 00:22:31.879 rmmod nvme_fabrics 00:22:31.879 rmmod nvme_keyring 00:22:31.879 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:31.879 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:22:31.879 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:31.879 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1011162 ']' 00:22:31.879 02:44:02 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1011162 00:22:31.879 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1011162 ']' 00:22:31.879 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1011162 00:22:31.879 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:31.879 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:31.879 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1011162 00:22:32.138 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:32.138 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:32.138 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1011162' 00:22:32.138 killing process with pid 1011162 00:22:32.138 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1011162 00:22:32.138 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1011162 00:22:32.396 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:32.396 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:32.396 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:32.396 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:32.396 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:32.396 02:44:02 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:32.396 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:32.396 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:32.396 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:32.396 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.396 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.396 02:44:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.298 02:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:34.298 00:22:34.298 real 0m10.068s 00:22:34.298 user 0m10.204s 00:22:34.298 sys 0m5.294s 00:22:34.298 02:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:34.298 02:44:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:34.298 ************************************ 00:22:34.298 END TEST nvmf_bdevio_no_huge 00:22:34.298 ************************************ 00:22:34.559 02:44:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:34.559 02:44:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:34.559 02:44:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:34.559 02:44:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:34.559 
************************************ 00:22:34.559 START TEST nvmf_tls 00:22:34.559 ************************************ 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:34.559 * Looking for test storage... 00:22:34.559 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:34.559 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:34.559 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.559 --rc genhtml_branch_coverage=1 00:22:34.559 --rc genhtml_function_coverage=1 00:22:34.559 --rc genhtml_legend=1 00:22:34.560 --rc geninfo_all_blocks=1 00:22:34.560 --rc geninfo_unexecuted_blocks=1 00:22:34.560 00:22:34.560 ' 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:34.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.560 --rc genhtml_branch_coverage=1 00:22:34.560 --rc genhtml_function_coverage=1 00:22:34.560 --rc genhtml_legend=1 00:22:34.560 --rc geninfo_all_blocks=1 00:22:34.560 --rc geninfo_unexecuted_blocks=1 00:22:34.560 00:22:34.560 ' 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:34.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.560 --rc genhtml_branch_coverage=1 00:22:34.560 --rc genhtml_function_coverage=1 00:22:34.560 --rc genhtml_legend=1 00:22:34.560 --rc geninfo_all_blocks=1 00:22:34.560 --rc geninfo_unexecuted_blocks=1 00:22:34.560 00:22:34.560 ' 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:34.560 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.560 --rc genhtml_branch_coverage=1 00:22:34.560 --rc genhtml_function_coverage=1 00:22:34.560 --rc genhtml_legend=1 00:22:34.560 --rc geninfo_all_blocks=1 00:22:34.560 --rc geninfo_unexecuted_blocks=1 00:22:34.560 00:22:34.560 ' 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.560 
02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:34.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:34.560 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:34.820 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:34.820 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:34.820 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:34.820 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:34.820 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:34.820 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.820 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:34.820 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:34.820 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:34.820 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.820 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.820 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.820 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:34.820 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:34.820 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:22:34.820 02:44:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.259 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.259 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:40.259 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:40.259 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:40.259 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:40.259 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:40.259 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:40.259 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:40.259 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:40.259 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:40.259 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:40.259 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:40.259 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:40.259 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:40.259 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:40.259 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.259 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.259 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.259 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.259 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.260 02:44:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:40.260 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:40.260 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.260 02:44:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:40.260 Found net devices under 0000:af:00.0: cvl_0_0 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:40.260 Found net devices under 0000:af:00.1: cvl_0_1 00:22:40.260 02:44:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:40.260 
02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:40.260 02:44:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.519 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.519 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.519 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:40.519 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.778 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.778 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.778 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:40.778 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:40.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:40.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:22:40.778 00:22:40.778 --- 10.0.0.2 ping statistics --- 00:22:40.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.778 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:22:40.778 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:40.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:22:40.778 00:22:40.778 --- 10.0.0.1 ping statistics --- 00:22:40.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.778 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:22:40.778 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.778 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:40.778 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:40.778 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.778 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:40.778 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:40.778 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.778 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:40.778 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:40.778 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:40.778 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:40.778 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:40.778 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.778 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1015064 00:22:40.778 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1015064 00:22:40.778 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:40.778 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1015064 ']' 00:22:40.778 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.778 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.778 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.778 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.778 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.778 [2024-12-16 02:44:11.313545] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:22:40.778 [2024-12-16 02:44:11.313591] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.778 [2024-12-16 02:44:11.393565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.779 [2024-12-16 02:44:11.414578] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.779 [2024-12-16 02:44:11.414613] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:40.779 [2024-12-16 02:44:11.414620] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.779 [2024-12-16 02:44:11.414626] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.779 [2024-12-16 02:44:11.414631] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:40.779 [2024-12-16 02:44:11.415110] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:41.038 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:41.038 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:41.038 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:41.038 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:41.038 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:41.038 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.038 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:41.038 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:41.038 true 00:22:41.297 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:41.297 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:41.297 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:41.297 02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:41.297 
02:44:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:41.556 02:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:41.556 02:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:41.814 02:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:41.814 02:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:41.814 02:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:41.814 02:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:41.814 02:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:42.074 02:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:42.074 02:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:42.074 02:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:42.074 02:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:42.333 02:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:42.333 02:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:42.333 02:44:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
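Each set/verify round trip above writes an option with `sock_impl_set_options`, reads it back with `sock_impl_get_options`, and extracts a single field with `jq -r`. The same extraction in Python, against a hypothetical reply shaped like the fields these jq filters reference:

```python
import json

# Hypothetical sock_impl_get_options reply; the field names follow the jq
# filters used in the log (.tls_version, .enable_ktls), not a captured RPC.
reply = '{"tls_version": 13, "enable_ktls": false}'
opts = json.loads(reply)
version = opts["tls_version"]   # what `jq -r .tls_version` would print
ktls = opts["enable_ktls"]      # what `jq -r .enable_ktls` would print
print(version, ktls)
```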
00:22:42.591 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:42.592 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:42.592 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:42.592 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:42.592 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:42.850 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:42.850 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:43.109 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:43.109 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:43.109 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:43.109 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:43.109 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:43.109 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:43.109 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:43.109 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:43.109 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:43.109 02:44:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:43.109 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:43.109 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:43.109 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:43.109 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:43.109 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:43.109 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:43.109 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:43.109 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:43.109 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:43.109 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.gwSzkocZTq 00:22:43.109 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:43.109 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.eMML9GoPVw 00:22:43.109 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:43.109 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:43.109 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.gwSzkocZTq 00:22:43.109 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.eMML9GoPVw 00:22:43.109 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:43.369 02:44:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:43.628 02:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.gwSzkocZTq 00:22:43.628 02:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.gwSzkocZTq 00:22:43.628 02:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:43.886 [2024-12-16 02:44:14.291855] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.886 02:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:43.887 02:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:44.145 [2024-12-16 02:44:14.652771] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:44.146 [2024-12-16 02:44:14.652999] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:44.146 02:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:44.404 malloc0 00:22:44.404 02:44:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:44.404 02:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.gwSzkocZTq 00:22:44.663 02:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:44.922 02:44:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.gwSzkocZTq 00:22:54.900 Initializing NVMe Controllers 00:22:54.900 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:54.900 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:54.900 Initialization complete. Launching workers. 
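The `format_interchange_psk` calls earlier derive the `NVMeTLSkey-1:01:...:` strings handed to `keyring_file_add_key`. Below is a sketch of that interchange format under stated assumptions: the configured key string is used verbatim as the key bytes, the trailing four bytes are a little-endian CRC32 (as computed by zlib), and `01` is the digest identifier. This mirrors the visible structure of the logged keys, not necessarily SPDK's exact `format_key` implementation.

```python
import base64
import zlib

def format_interchange_psk(key, digest=1):
    # Build "NVMeTLSkey-1:<dd>:<base64(key bytes + checksum)>:".
    # Assumption: the key bytes are the literal characters of `key`, with a
    # 4-byte little-endian CRC32 trailer appended before base64 encoding.
    data = key.encode("ascii")
    crc = zlib.crc32(data).to_bytes(4, "little")
    return "NVMeTLSkey-1:%02d:%s:" % (digest, base64.b64encode(data + crc).decode())

psk = format_interchange_psk("00112233445566778899aabbccddeeff")
print(psk)
```

Decoding the base64 payload of the logged `key=NVMeTLSkey-1:01:MDAxMTIy...` value yields the 32 ASCII key characters followed by 4 checksum bytes, which is the structure the sketch reproduces.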
00:22:54.900 ======================================================== 00:22:54.900 Latency(us) 00:22:54.900 Device Information : IOPS MiB/s Average min max 00:22:54.900 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16949.21 66.21 3776.05 867.36 5181.85 00:22:54.900 ======================================================== 00:22:54.900 Total : 16949.21 66.21 3776.05 867.36 5181.85 00:22:54.900 00:22:54.900 02:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gwSzkocZTq 00:22:54.900 02:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:54.900 02:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:54.900 02:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:54.900 02:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gwSzkocZTq 00:22:54.900 02:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:54.900 02:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1017349 00:22:54.900 02:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:54.900 02:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:54.900 02:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1017349 /var/tmp/bdevperf.sock 00:22:54.900 02:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1017349 ']' 00:22:54.900 02:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
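In the `spdk_nvme_perf` summary above, the MiB/s column follows from the IOPS figure and the 4096-byte I/O size (`-o 4096`): MiB/s = IOPS × io_size / 2^20. A quick check against the reported 16949.21 IOPS / 66.21 MiB/s row:

```python
iops = 16949.21         # from the perf summary line above
io_size = 4096          # -o 4096: each I/O moves 4 KiB
mib_s = iops * io_size / (1 << 20)
print(round(mib_s, 2))  # 66.21, matching the MiB/s column
```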
00:22:54.900 02:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:54.900 02:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:54.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:54.900 02:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:54.900 02:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:55.160 [2024-12-16 02:44:25.566197] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:22:55.160 [2024-12-16 02:44:25.566242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1017349 ] 00:22:55.160 [2024-12-16 02:44:25.639487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:55.160 [2024-12-16 02:44:25.661819] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:55.160 02:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:55.160 02:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:55.160 02:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gwSzkocZTq 00:22:55.419 02:44:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:22:55.678 [2024-12-16 02:44:26.092594] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:55.678 TLSTESTn1 00:22:55.678 02:44:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:55.678 Running I/O for 10 seconds... 00:22:57.993 5352.00 IOPS, 20.91 MiB/s [2024-12-16T01:44:29.589Z] 5536.50 IOPS, 21.63 MiB/s [2024-12-16T01:44:30.524Z] 5522.33 IOPS, 21.57 MiB/s [2024-12-16T01:44:31.460Z] 5507.75 IOPS, 21.51 MiB/s [2024-12-16T01:44:32.395Z] 5503.60 IOPS, 21.50 MiB/s [2024-12-16T01:44:33.331Z] 5516.17 IOPS, 21.55 MiB/s [2024-12-16T01:44:34.707Z] 5516.00 IOPS, 21.55 MiB/s [2024-12-16T01:44:35.643Z] 5549.62 IOPS, 21.68 MiB/s [2024-12-16T01:44:36.579Z] 5555.00 IOPS, 21.70 MiB/s [2024-12-16T01:44:36.579Z] 5574.70 IOPS, 21.78 MiB/s 00:23:05.920 Latency(us) 00:23:05.920 [2024-12-16T01:44:36.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.920 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:05.920 Verification LBA range: start 0x0 length 0x2000 00:23:05.920 TLSTESTn1 : 10.02 5578.53 21.79 0.00 0.00 22909.99 4681.14 47185.92 00:23:05.920 [2024-12-16T01:44:36.579Z] =================================================================================================================== 00:23:05.920 [2024-12-16T01:44:36.579Z] Total : 5578.53 21.79 0.00 0.00 22909.99 4681.14 47185.92 00:23:05.920 { 00:23:05.920 "results": [ 00:23:05.920 { 00:23:05.920 "job": "TLSTESTn1", 00:23:05.920 "core_mask": "0x4", 00:23:05.920 "workload": "verify", 00:23:05.920 "status": "finished", 00:23:05.920 "verify_range": { 00:23:05.920 "start": 0, 00:23:05.920 "length": 8192 00:23:05.920 }, 00:23:05.920 "queue_depth": 128, 00:23:05.920 "io_size": 4096, 00:23:05.920 "runtime": 10.016086, 00:23:05.920 "iops": 
5578.526382461173, 00:23:05.920 "mibps": 21.791118681488957, 00:23:05.920 "io_failed": 0, 00:23:05.920 "io_timeout": 0, 00:23:05.920 "avg_latency_us": 22909.98677258336, 00:23:05.920 "min_latency_us": 4681.142857142857, 00:23:05.920 "max_latency_us": 47185.92 00:23:05.920 } 00:23:05.920 ], 00:23:05.920 "core_count": 1 00:23:05.920 } 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1017349 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1017349 ']' 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1017349 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1017349 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1017349' 00:23:05.920 killing process with pid 1017349 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1017349 00:23:05.920 Received shutdown signal, test time was about 10.000000 seconds 00:23:05.920 00:23:05.920 Latency(us) 00:23:05.920 [2024-12-16T01:44:36.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.920 [2024-12-16T01:44:36.579Z] 
=================================================================================================================== 00:23:05.920 [2024-12-16T01:44:36.579Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1017349 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eMML9GoPVw 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eMML9GoPVw 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eMML9GoPVw 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.eMML9GoPVw 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1019130 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1019130 /var/tmp/bdevperf.sock 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1019130 ']' 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:05.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:05.920 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.179 [2024-12-16 02:44:36.583222] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:06.179 [2024-12-16 02:44:36.583267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1019130 ] 00:23:06.179 [2024-12-16 02:44:36.650841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.179 [2024-12-16 02:44:36.673380] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:06.179 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.179 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:06.179 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.eMML9GoPVw 00:23:06.438 02:44:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:06.696 [2024-12-16 02:44:37.121441] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:06.696 [2024-12-16 02:44:37.126189] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:06.696 [2024-12-16 02:44:37.126647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ee0c0 (107): Transport endpoint is not connected 00:23:06.696 [2024-12-16 02:44:37.127639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20ee0c0 (9): Bad file descriptor 00:23:06.696 
[2024-12-16 02:44:37.128640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:06.696 [2024-12-16 02:44:37.128653] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:06.696 [2024-12-16 02:44:37.128659] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:06.696 [2024-12-16 02:44:37.128666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:23:06.696 request: 00:23:06.696 { 00:23:06.696 "name": "TLSTEST", 00:23:06.696 "trtype": "tcp", 00:23:06.696 "traddr": "10.0.0.2", 00:23:06.696 "adrfam": "ipv4", 00:23:06.696 "trsvcid": "4420", 00:23:06.696 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:06.696 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:06.696 "prchk_reftag": false, 00:23:06.696 "prchk_guard": false, 00:23:06.696 "hdgst": false, 00:23:06.696 "ddgst": false, 00:23:06.696 "psk": "key0", 00:23:06.696 "allow_unrecognized_csi": false, 00:23:06.696 "method": "bdev_nvme_attach_controller", 00:23:06.696 "req_id": 1 00:23:06.696 } 00:23:06.696 Got JSON-RPC error response 00:23:06.696 response: 00:23:06.696 { 00:23:06.696 "code": -5, 00:23:06.696 "message": "Input/output error" 00:23:06.696 } 00:23:06.696 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1019130 00:23:06.696 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1019130 ']' 00:23:06.696 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1019130 00:23:06.696 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:06.696 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:06.696 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1019130 00:23:06.696 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:06.696 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:06.696 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1019130' 00:23:06.696 killing process with pid 1019130 00:23:06.696 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1019130 00:23:06.696 Received shutdown signal, test time was about 10.000000 seconds 00:23:06.696 00:23:06.696 Latency(us) 00:23:06.696 [2024-12-16T01:44:37.355Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:06.696 [2024-12-16T01:44:37.355Z] =================================================================================================================== 00:23:06.696 [2024-12-16T01:44:37.355Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:06.696 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1019130 00:23:06.696 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:06.696 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:06.696 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:06.696 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:06.696 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:06.697 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.gwSzkocZTq 00:23:06.697 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
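The `NOT run_bdevperf ...` wrappers and the `es=1` / `(( es > 128 ))` checks above implement an expect-failure pattern: the test passes only when the wrapped command exits nonzero, and a status above 128 would mean the shell saw a signal death rather than an ordinary failure. A hypothetical Python analog of that inversion (not SPDK's actual helper):

```python
import subprocess

def expect_failure(*cmd):
    # Succeed iff the command fails, like the harness's NOT wrapper.
    # Note: in the shell convention a status > 128 means "killed by signal";
    # subprocess instead reports signal deaths as negative returncodes.
    es = subprocess.run(cmd).returncode
    return es != 0

print(expect_failure("false"))  # the command fails, so the check passes: True
```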
00:23:06.697 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.gwSzkocZTq 00:23:06.697 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:06.697 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:06.697 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:06.697 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:06.697 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.gwSzkocZTq 00:23:06.697 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:06.955 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:06.955 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:06.955 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gwSzkocZTq 00:23:06.955 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:06.955 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1019160 00:23:06.955 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:06.955 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:06.955 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1019160 
/var/tmp/bdevperf.sock 00:23:06.955 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1019160 ']' 00:23:06.955 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:06.955 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:06.955 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:06.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:06.955 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:06.955 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.955 [2024-12-16 02:44:37.400133] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:06.955 [2024-12-16 02:44:37.400181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1019160 ] 00:23:06.955 [2024-12-16 02:44:37.477123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.955 [2024-12-16 02:44:37.498147] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:06.955 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.955 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:06.955 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gwSzkocZTq 00:23:07.214 02:44:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:07.473 [2024-12-16 02:44:37.969393] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:07.473 [2024-12-16 02:44:37.978261] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:07.473 [2024-12-16 02:44:37.978283] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:07.473 [2024-12-16 02:44:37.978307] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:07.473 [2024-12-16 02:44:37.978718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe600c0 (107): Transport endpoint is not connected 00:23:07.473 [2024-12-16 02:44:37.979711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe600c0 (9): Bad file descriptor 00:23:07.473 [2024-12-16 02:44:37.980713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:07.473 [2024-12-16 02:44:37.980723] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:07.473 [2024-12-16 02:44:37.980729] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:07.473 [2024-12-16 02:44:37.980738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:23:07.473 request: 00:23:07.473 { 00:23:07.473 "name": "TLSTEST", 00:23:07.473 "trtype": "tcp", 00:23:07.473 "traddr": "10.0.0.2", 00:23:07.473 "adrfam": "ipv4", 00:23:07.473 "trsvcid": "4420", 00:23:07.473 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.473 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:07.473 "prchk_reftag": false, 00:23:07.473 "prchk_guard": false, 00:23:07.473 "hdgst": false, 00:23:07.473 "ddgst": false, 00:23:07.473 "psk": "key0", 00:23:07.473 "allow_unrecognized_csi": false, 00:23:07.473 "method": "bdev_nvme_attach_controller", 00:23:07.473 "req_id": 1 00:23:07.473 } 00:23:07.473 Got JSON-RPC error response 00:23:07.473 response: 00:23:07.473 { 00:23:07.473 "code": -5, 00:23:07.473 "message": "Input/output error" 00:23:07.473 } 00:23:07.473 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1019160 00:23:07.473 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1019160 ']' 00:23:07.473 02:44:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1019160 00:23:07.473 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:07.473 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:07.473 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1019160 00:23:07.473 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:07.473 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:07.473 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1019160' 00:23:07.473 killing process with pid 1019160 00:23:07.473 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1019160 00:23:07.473 Received shutdown signal, test time was about 10.000000 seconds 00:23:07.473 00:23:07.473 Latency(us) 00:23:07.473 [2024-12-16T01:44:38.132Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:07.473 [2024-12-16T01:44:38.132Z] =================================================================================================================== 00:23:07.473 [2024-12-16T01:44:38.132Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:07.473 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1019160 00:23:07.732 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:07.732 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:07.732 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:07.732 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:07.733 02:44:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:07.733 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.gwSzkocZTq 00:23:07.733 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:07.733 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.gwSzkocZTq 00:23:07.733 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:07.733 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:07.733 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:07.733 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:07.733 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.gwSzkocZTq 00:23:07.733 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:07.733 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:07.733 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:07.733 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.gwSzkocZTq 00:23:07.733 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:07.733 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1019375 00:23:07.733 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:07.733 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:07.733 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1019375 /var/tmp/bdevperf.sock 00:23:07.733 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1019375 ']' 00:23:07.733 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.733 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:07.733 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:07.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:07.733 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:07.733 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.733 [2024-12-16 02:44:38.249931] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:07.733 [2024-12-16 02:44:38.249982] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1019375 ] 00:23:07.733 [2024-12-16 02:44:38.321022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.733 [2024-12-16 02:44:38.340985] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:07.991 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:07.991 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:07.991 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gwSzkocZTq 00:23:07.991 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:08.251 [2024-12-16 02:44:38.799761] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:08.251 [2024-12-16 02:44:38.810360] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:08.251 [2024-12-16 02:44:38.810382] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:08.251 [2024-12-16 02:44:38.810406] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:08.251 [2024-12-16 02:44:38.811015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f30c0 (107): Transport endpoint is not connected 00:23:08.251 [2024-12-16 02:44:38.812009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12f30c0 (9): Bad file descriptor 00:23:08.251 [2024-12-16 02:44:38.813011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:08.251 [2024-12-16 02:44:38.813021] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:08.251 [2024-12-16 02:44:38.813028] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:08.251 [2024-12-16 02:44:38.813036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:23:08.251 request: 00:23:08.251 { 00:23:08.251 "name": "TLSTEST", 00:23:08.251 "trtype": "tcp", 00:23:08.251 "traddr": "10.0.0.2", 00:23:08.251 "adrfam": "ipv4", 00:23:08.251 "trsvcid": "4420", 00:23:08.251 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:08.251 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:08.251 "prchk_reftag": false, 00:23:08.251 "prchk_guard": false, 00:23:08.251 "hdgst": false, 00:23:08.251 "ddgst": false, 00:23:08.251 "psk": "key0", 00:23:08.251 "allow_unrecognized_csi": false, 00:23:08.251 "method": "bdev_nvme_attach_controller", 00:23:08.251 "req_id": 1 00:23:08.251 } 00:23:08.251 Got JSON-RPC error response 00:23:08.251 response: 00:23:08.251 { 00:23:08.251 "code": -5, 00:23:08.251 "message": "Input/output error" 00:23:08.251 } 00:23:08.251 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1019375 00:23:08.251 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1019375 ']' 00:23:08.251 02:44:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1019375 00:23:08.251 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:08.251 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:08.251 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1019375 00:23:08.251 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:08.251 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:08.251 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1019375' 00:23:08.251 killing process with pid 1019375 00:23:08.251 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1019375 00:23:08.251 Received shutdown signal, test time was about 10.000000 seconds 00:23:08.251 00:23:08.251 Latency(us) 00:23:08.251 [2024-12-16T01:44:38.910Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:08.251 [2024-12-16T01:44:38.910Z] =================================================================================================================== 00:23:08.251 [2024-12-16T01:44:38.910Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:08.251 02:44:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1019375 00:23:08.510 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:08.510 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:08.510 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:08.510 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:08.510 02:44:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:08.510 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:08.510 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:08.510 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:08.510 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:08.510 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:08.510 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:08.510 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:08.510 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:08.510 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:08.510 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:08.510 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:08.510 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:08.510 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:08.510 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1019578 00:23:08.510 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:08.510 02:44:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:08.510 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1019578 /var/tmp/bdevperf.sock 00:23:08.510 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1019578 ']' 00:23:08.510 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:08.510 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:08.510 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:08.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:08.510 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:08.510 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:08.510 [2024-12-16 02:44:39.091464] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:08.510 [2024-12-16 02:44:39.091514] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1019578 ] 00:23:08.510 [2024-12-16 02:44:39.166104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.768 [2024-12-16 02:44:39.186731] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:08.768 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:08.768 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:08.768 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:09.027 [2024-12-16 02:44:39.445027] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:09.027 [2024-12-16 02:44:39.445060] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:09.027 request: 00:23:09.027 { 00:23:09.027 "name": "key0", 00:23:09.027 "path": "", 00:23:09.027 "method": "keyring_file_add_key", 00:23:09.027 "req_id": 1 00:23:09.027 } 00:23:09.027 Got JSON-RPC error response 00:23:09.027 response: 00:23:09.027 { 00:23:09.027 "code": -1, 00:23:09.027 "message": "Operation not permitted" 00:23:09.027 } 00:23:09.027 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:09.027 [2024-12-16 02:44:39.641617] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:23:09.027 [2024-12-16 02:44:39.641648] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:09.027 request: 00:23:09.027 { 00:23:09.027 "name": "TLSTEST", 00:23:09.027 "trtype": "tcp", 00:23:09.027 "traddr": "10.0.0.2", 00:23:09.027 "adrfam": "ipv4", 00:23:09.027 "trsvcid": "4420", 00:23:09.027 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:09.027 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:09.027 "prchk_reftag": false, 00:23:09.027 "prchk_guard": false, 00:23:09.027 "hdgst": false, 00:23:09.027 "ddgst": false, 00:23:09.027 "psk": "key0", 00:23:09.027 "allow_unrecognized_csi": false, 00:23:09.027 "method": "bdev_nvme_attach_controller", 00:23:09.027 "req_id": 1 00:23:09.027 } 00:23:09.027 Got JSON-RPC error response 00:23:09.027 response: 00:23:09.027 { 00:23:09.027 "code": -126, 00:23:09.027 "message": "Required key not available" 00:23:09.027 } 00:23:09.027 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1019578 00:23:09.027 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1019578 ']' 00:23:09.027 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1019578 00:23:09.027 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:09.027 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:09.027 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1019578 00:23:09.286 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:09.286 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:09.286 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1019578' 00:23:09.286 killing process with pid 1019578 
00:23:09.286 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1019578 00:23:09.286 Received shutdown signal, test time was about 10.000000 seconds 00:23:09.286 00:23:09.286 Latency(us) 00:23:09.286 [2024-12-16T01:44:39.945Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.286 [2024-12-16T01:44:39.945Z] =================================================================================================================== 00:23:09.286 [2024-12-16T01:44:39.945Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:09.286 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1019578 00:23:09.286 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:09.286 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:09.286 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:09.286 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:09.286 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:09.286 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1015064 00:23:09.286 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1015064 ']' 00:23:09.286 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1015064 00:23:09.286 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:09.286 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:09.286 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1015064 00:23:09.286 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:23:09.286 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:09.286 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1015064' 00:23:09.286 killing process with pid 1015064 00:23:09.286 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1015064 00:23:09.286 02:44:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1015064 00:23:09.545 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:09.545 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:09.545 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:09.545 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:09.546 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:09.546 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:09.546 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:09.546 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:09.546 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:09.546 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.vFEfGfbcOR 00:23:09.546 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:09.546 02:44:40 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.vFEfGfbcOR 00:23:09.546 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:09.546 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:09.546 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:09.546 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.546 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1019642 00:23:09.546 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:09.546 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1019642 00:23:09.546 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1019642 ']' 00:23:09.546 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.546 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.546 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.546 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.546 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.546 [2024-12-16 02:44:40.169404] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:09.546 [2024-12-16 02:44:40.169455] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.804 [2024-12-16 02:44:40.249602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.804 [2024-12-16 02:44:40.270514] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.804 [2024-12-16 02:44:40.270550] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.804 [2024-12-16 02:44:40.270558] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.804 [2024-12-16 02:44:40.270564] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.804 [2024-12-16 02:44:40.270569] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:09.804 [2024-12-16 02:44:40.271059] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.804 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:09.804 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:09.804 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:09.804 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:09.804 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:09.804 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.804 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.vFEfGfbcOR 00:23:09.804 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.vFEfGfbcOR 00:23:09.804 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:10.063 [2024-12-16 02:44:40.585683] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:10.063 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:10.322 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:10.322 [2024-12-16 02:44:40.946607] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:10.322 [2024-12-16 02:44:40.946821] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:10.322 02:44:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:10.582 malloc0 00:23:10.582 02:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:10.841 02:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.vFEfGfbcOR 00:23:11.099 02:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:11.099 02:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vFEfGfbcOR 00:23:11.099 02:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:11.099 02:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:11.099 02:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:11.099 02:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.vFEfGfbcOR 00:23:11.099 02:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:11.099 02:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1020011 00:23:11.099 02:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:11.099 02:44:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:11.099 02:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1020011 /var/tmp/bdevperf.sock 00:23:11.099 02:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1020011 ']' 00:23:11.099 02:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:11.099 02:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:11.099 02:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:11.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:11.099 02:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:11.099 02:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.358 [2024-12-16 02:44:41.772030] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:11.358 [2024-12-16 02:44:41.772078] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1020011 ] 00:23:11.358 [2024-12-16 02:44:41.845757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.358 [2024-12-16 02:44:41.868103] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:11.358 02:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:11.358 02:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:11.358 02:44:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vFEfGfbcOR 00:23:11.617 02:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:11.875 [2024-12-16 02:44:42.302951] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:11.875 TLSTESTn1 00:23:11.876 02:44:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:11.876 Running I/O for 10 seconds... 
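The run above is the TLS happy path on the initiator side: register the PSK file with `keyring_file_add_key`, attach a TLS-protected controller with `bdev_nvme_attach_controller --psk`, then drive verify I/O through bdevperf. A minimal dry-run sketch of that RPC sequence follows, with `echo` standing in for `scripts/rpc.py` so no live SPDK target is needed; the NQNs, address, and flags are taken from the log, while the key file path is a hypothetical placeholder (the log uses a mktemp name).

```shell
#!/usr/bin/env bash
# Dry-run sketch of the initiator-side TLS setup sequence seen in the log.
# RPC is a stand-in: it echoes each rpc.py invocation instead of executing it.
RPC="echo rpc.py -s /var/tmp/bdevperf.sock"

tls_setup_dryrun() {
    local subnqn="nqn.2016-06.io.spdk:cnode1"
    local hostnqn="nqn.2016-06.io.spdk:host1"
    local psk_file="/tmp/tls_psk.key"   # hypothetical path, not the log's mktemp name

    # 1. Register the PSK file under the keyring name "key0" (tls.sh@33).
    $RPC keyring_file_add_key key0 "$psk_file"

    # 2. Attach a TLS-protected controller that presents that PSK (tls.sh@35).
    $RPC bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n "$subnqn" -q "$hostnqn" --psk key0
}

tls_setup_dryrun
```

With a real target, replacing the `RPC` stand-in with the actual `scripts/rpc.py` path would issue the same two calls the log records just before `TLSTESTn1` appears.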
00:23:14.189 5220.00 IOPS, 20.39 MiB/s [2024-12-16T01:44:45.782Z] 5409.50 IOPS, 21.13 MiB/s [2024-12-16T01:44:46.717Z] 5366.33 IOPS, 20.96 MiB/s [2024-12-16T01:44:47.652Z] 5266.25 IOPS, 20.57 MiB/s [2024-12-16T01:44:48.587Z] 5208.20 IOPS, 20.34 MiB/s [2024-12-16T01:44:49.523Z] 5200.17 IOPS, 20.31 MiB/s [2024-12-16T01:44:50.899Z] 5150.86 IOPS, 20.12 MiB/s [2024-12-16T01:44:51.836Z] 5151.75 IOPS, 20.12 MiB/s [2024-12-16T01:44:52.773Z] 5134.89 IOPS, 20.06 MiB/s [2024-12-16T01:44:52.773Z] 5127.10 IOPS, 20.03 MiB/s 00:23:22.114 Latency(us) 00:23:22.114 [2024-12-16T01:44:52.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.114 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:22.114 Verification LBA range: start 0x0 length 0x2000 00:23:22.114 TLSTESTn1 : 10.02 5131.24 20.04 0.00 0.00 24909.37 6772.05 30208.98 00:23:22.114 [2024-12-16T01:44:52.773Z] =================================================================================================================== 00:23:22.114 [2024-12-16T01:44:52.773Z] Total : 5131.24 20.04 0.00 0.00 24909.37 6772.05 30208.98 00:23:22.114 { 00:23:22.114 "results": [ 00:23:22.114 { 00:23:22.114 "job": "TLSTESTn1", 00:23:22.114 "core_mask": "0x4", 00:23:22.114 "workload": "verify", 00:23:22.114 "status": "finished", 00:23:22.114 "verify_range": { 00:23:22.114 "start": 0, 00:23:22.114 "length": 8192 00:23:22.114 }, 00:23:22.114 "queue_depth": 128, 00:23:22.114 "io_size": 4096, 00:23:22.114 "runtime": 10.016878, 00:23:22.114 "iops": 5131.239493982057, 00:23:22.114 "mibps": 20.04390427336741, 00:23:22.114 "io_failed": 0, 00:23:22.114 "io_timeout": 0, 00:23:22.114 "avg_latency_us": 24909.373072627874, 00:23:22.114 "min_latency_us": 6772.053333333333, 00:23:22.114 "max_latency_us": 30208.975238095238 00:23:22.114 } 00:23:22.114 ], 00:23:22.114 "core_count": 1 00:23:22.114 } 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1020011 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1020011 ']' 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1020011 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1020011 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1020011' 00:23:22.114 killing process with pid 1020011 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1020011 00:23:22.114 Received shutdown signal, test time was about 10.000000 seconds 00:23:22.114 00:23:22.114 Latency(us) 00:23:22.114 [2024-12-16T01:44:52.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.114 [2024-12-16T01:44:52.773Z] =================================================================================================================== 00:23:22.114 [2024-12-16T01:44:52.773Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1020011 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.vFEfGfbcOR 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vFEfGfbcOR 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vFEfGfbcOR 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.vFEfGfbcOR 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.vFEfGfbcOR 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1021676 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1021676 /var/tmp/bdevperf.sock 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1021676 ']' 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:22.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:22.114 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.373 [2024-12-16 02:44:52.799487] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:22.373 [2024-12-16 02:44:52.799537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1021676 ] 00:23:22.373 [2024-12-16 02:44:52.872291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.373 [2024-12-16 02:44:52.893402] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:22.373 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:22.373 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:22.373 02:44:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vFEfGfbcOR 00:23:22.632 [2024-12-16 02:44:53.175903] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.vFEfGfbcOR': 0100666 00:23:22.632 [2024-12-16 02:44:53.175934] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:22.632 request: 00:23:22.632 { 00:23:22.632 "name": "key0", 00:23:22.632 "path": "/tmp/tmp.vFEfGfbcOR", 00:23:22.632 "method": "keyring_file_add_key", 00:23:22.632 "req_id": 1 00:23:22.632 } 00:23:22.632 Got JSON-RPC error response 00:23:22.632 response: 00:23:22.632 { 00:23:22.632 "code": -1, 00:23:22.632 "message": "Operation not permitted" 00:23:22.632 } 00:23:22.632 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:22.892 [2024-12-16 02:44:53.372484] bdev_nvme_rpc.c: 
515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:22.892 [2024-12-16 02:44:53.372509] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:22.892 request: 00:23:22.892 { 00:23:22.892 "name": "TLSTEST", 00:23:22.892 "trtype": "tcp", 00:23:22.892 "traddr": "10.0.0.2", 00:23:22.892 "adrfam": "ipv4", 00:23:22.892 "trsvcid": "4420", 00:23:22.892 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:22.892 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:22.892 "prchk_reftag": false, 00:23:22.892 "prchk_guard": false, 00:23:22.892 "hdgst": false, 00:23:22.892 "ddgst": false, 00:23:22.892 "psk": "key0", 00:23:22.892 "allow_unrecognized_csi": false, 00:23:22.892 "method": "bdev_nvme_attach_controller", 00:23:22.892 "req_id": 1 00:23:22.892 } 00:23:22.892 Got JSON-RPC error response 00:23:22.892 response: 00:23:22.892 { 00:23:22.892 "code": -126, 00:23:22.892 "message": "Required key not available" 00:23:22.892 } 00:23:22.892 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1021676 00:23:22.892 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1021676 ']' 00:23:22.892 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1021676 00:23:22.892 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:22.892 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:22.892 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1021676 00:23:22.892 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:22.892 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:22.892 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1021676' 00:23:22.892 killing process with pid 1021676 00:23:22.892 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1021676 00:23:22.892 Received shutdown signal, test time was about 10.000000 seconds 00:23:22.892 00:23:22.892 Latency(us) 00:23:22.892 [2024-12-16T01:44:53.551Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:22.892 [2024-12-16T01:44:53.551Z] =================================================================================================================== 00:23:22.892 [2024-12-16T01:44:53.551Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:22.892 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1021676 00:23:23.151 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:23.151 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:23.151 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:23.151 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:23.151 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:23.151 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1019642 00:23:23.151 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1019642 ']' 00:23:23.151 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1019642 00:23:23.151 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:23.151 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:23.151 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1019642 00:23:23.151 
02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:23.151 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:23.151 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1019642' 00:23:23.151 killing process with pid 1019642 00:23:23.151 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1019642 00:23:23.151 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1019642 00:23:23.410 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:23.410 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:23.410 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:23.410 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.410 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1021904 00:23:23.410 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1021904 00:23:23.410 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:23.410 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1021904 ']' 00:23:23.410 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.410 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:23.410 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:23:23.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.410 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:23.410 02:44:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.410 [2024-12-16 02:44:53.864202] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:23.410 [2024-12-16 02:44:53.864246] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:23.410 [2024-12-16 02:44:53.942687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.410 [2024-12-16 02:44:53.963230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:23.410 [2024-12-16 02:44:53.963267] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:23.410 [2024-12-16 02:44:53.963274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:23.410 [2024-12-16 02:44:53.963280] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:23.410 [2024-12-16 02:44:53.963285] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
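The negative test above hinges on SPDK's file-based keyring refusing key files with permissive modes: after `chmod 0666` (tls.sh@171), `keyring_file_add_key` fails with "Invalid permissions for key file ... 0100666", and `bdev_nvme_attach_controller` then fails because `key0` never entered the keyring. The following self-contained sketch mimics that permission gate with `stat`; the 0600 requirement is inferred from the log (0666 is rejected, and tls.sh@182 later restores 0600 before the next successful run), and the key string is a dummy placeholder, not a real PSK.

```shell
#!/usr/bin/env bash
# Mimic the keyring's permission gate on a scratch key file (Linux stat -c %a).
check_key_perms() {
    # Reject group/other-accessible key files, as the log's keyring errors show.
    local mode
    mode="$(stat -c %a "$1")"
    if [ "$mode" != "600" ]; then
        echo "Invalid permissions for key file '$1': 0100$mode"
        return 1
    fi
    echo "key file '$1' accepted"
}

keyfile="$(mktemp)"
echo "dummy-psk-material" > "$keyfile"   # placeholder, not a valid TLS PSK

chmod 0666 "$keyfile"
check_key_perms "$keyfile" || true   # rejected, mirroring the keyring error
chmod 0600 "$keyfile"
check_key_perms "$keyfile"           # accepted once the mode is restricted
rm -f "$keyfile"
```

This is why the bdevperf retry in this section reports `Total : 0.00 ...` and exits via the `NOT` wrapper: no I/O ever ran, because the controller attach was refused upstream of the data path.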
00:23:23.410 [2024-12-16 02:44:53.963768] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:23.410 02:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:23.410 02:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:23.410 02:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:23.410 02:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:23.410 02:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.669 02:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:23.669 02:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.vFEfGfbcOR 00:23:23.669 02:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:23.669 02:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.vFEfGfbcOR 00:23:23.669 02:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:23.669 02:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:23.669 02:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:23.669 02:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:23.669 02:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.vFEfGfbcOR 00:23:23.669 02:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.vFEfGfbcOR 00:23:23.669 02:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:23.669 [2024-12-16 02:44:54.273688] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:23.669 02:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:23.929 02:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:24.187 [2024-12-16 02:44:54.674727] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:24.187 [2024-12-16 02:44:54.674929] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.187 02:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:24.446 malloc0 00:23:24.446 02:44:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:24.446 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.vFEfGfbcOR 00:23:24.705 [2024-12-16 02:44:55.260126] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.vFEfGfbcOR': 0100666 00:23:24.705 [2024-12-16 02:44:55.260151] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:24.705 request: 00:23:24.705 { 00:23:24.705 "name": "key0", 00:23:24.705 "path": "/tmp/tmp.vFEfGfbcOR", 00:23:24.705 "method": "keyring_file_add_key", 00:23:24.705 "req_id": 1 
00:23:24.705 } 00:23:24.705 Got JSON-RPC error response 00:23:24.705 response: 00:23:24.705 { 00:23:24.705 "code": -1, 00:23:24.705 "message": "Operation not permitted" 00:23:24.705 } 00:23:24.705 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:24.963 [2024-12-16 02:44:55.448639] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:24.963 [2024-12-16 02:44:55.448675] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:24.963 request: 00:23:24.963 { 00:23:24.963 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.963 "host": "nqn.2016-06.io.spdk:host1", 00:23:24.963 "psk": "key0", 00:23:24.964 "method": "nvmf_subsystem_add_host", 00:23:24.964 "req_id": 1 00:23:24.964 } 00:23:24.964 Got JSON-RPC error response 00:23:24.964 response: 00:23:24.964 { 00:23:24.964 "code": -32603, 00:23:24.964 "message": "Internal error" 00:23:24.964 } 00:23:24.964 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:24.964 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:24.964 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:24.964 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:24.964 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1021904 00:23:24.964 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1021904 ']' 00:23:24.964 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1021904 00:23:24.964 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:24.964 02:44:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:24.964 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1021904 00:23:24.964 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:24.964 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:24.964 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1021904' 00:23:24.964 killing process with pid 1021904 00:23:24.964 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1021904 00:23:24.964 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1021904 00:23:25.222 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.vFEfGfbcOR 00:23:25.222 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:25.222 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:25.222 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:25.222 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.222 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1022260 00:23:25.222 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1022260 00:23:25.222 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:25.222 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1022260 ']' 00:23:25.222 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.222 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:25.222 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.222 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:25.222 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.222 [2024-12-16 02:44:55.739435] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:25.222 [2024-12-16 02:44:55.739487] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:25.222 [2024-12-16 02:44:55.818485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.222 [2024-12-16 02:44:55.839368] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:25.222 [2024-12-16 02:44:55.839404] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:25.222 [2024-12-16 02:44:55.839412] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:25.222 [2024-12-16 02:44:55.839420] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:25.222 [2024-12-16 02:44:55.839425] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
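The `setup_nvmf_tgt` helper the log keeps re-entering (tls.sh@50-59) provisions the target side in a fixed order: TCP transport, subsystem, TLS listener (`-k`), malloc bdev, namespace, key, and finally the PSK-bound host. A dry-run sketch of that sequence, again with `echo` standing in for `scripts/rpc.py` and a hypothetical key path; the order matters, since the log shows `nvmf_subsystem_add_host --psk key0` failing with -32603 whenever the preceding `keyring_file_add_key` step was rejected.

```shell
#!/usr/bin/env bash
# Dry-run sketch of setup_nvmf_tgt's target-side RPC sequence from the log.
RPC="echo rpc.py"   # stand-in: prints each rpc.py call instead of executing it

setup_tgt_dryrun() {
    local key_path="$1"
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as TLS-enabled ("TLS support is considered experimental").
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # The next two steps are order-dependent: add_host --psk needs key0 to exist.
    $RPC keyring_file_add_key key0 "$key_path"
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
}

setup_tgt_dryrun /tmp/tls_psk.key   # hypothetical key path
```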
00:23:25.222 [2024-12-16 02:44:55.839917] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.481 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:25.481 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:25.481 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:25.481 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:25.481 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:25.481 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.481 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.vFEfGfbcOR 00:23:25.481 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.vFEfGfbcOR 00:23:25.481 02:44:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:25.740 [2024-12-16 02:44:56.143347] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:25.740 02:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:25.740 02:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:25.998 [2024-12-16 02:44:56.536362] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:25.998 [2024-12-16 02:44:56.536558] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:25.998 02:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:26.256 malloc0 00:23:26.256 02:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:26.515 02:44:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.vFEfGfbcOR 00:23:26.515 02:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:26.825 02:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1022623 00:23:26.825 02:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:26.825 02:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:26.825 02:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1022623 /var/tmp/bdevperf.sock 00:23:26.825 02:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1022623 ']' 00:23:26.825 02:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:26.825 02:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:26.825 02:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:23:26.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:26.825 02:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:26.825 02:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.825 [2024-12-16 02:44:57.407037] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:26.825 [2024-12-16 02:44:57.407090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1022623 ] 00:23:27.167 [2024-12-16 02:44:57.483553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.167 [2024-12-16 02:44:57.505747] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.167 02:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:27.167 02:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:27.167 02:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vFEfGfbcOR 00:23:27.167 02:44:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:27.426 [2024-12-16 02:44:57.964784] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:27.426 TLSTESTn1 00:23:27.426 02:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:27.685 02:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:27.685 "subsystems": [ 00:23:27.685 { 00:23:27.685 "subsystem": "keyring", 00:23:27.685 "config": [ 00:23:27.685 { 00:23:27.685 "method": "keyring_file_add_key", 00:23:27.685 "params": { 00:23:27.685 "name": "key0", 00:23:27.685 "path": "/tmp/tmp.vFEfGfbcOR" 00:23:27.685 } 00:23:27.685 } 00:23:27.685 ] 00:23:27.685 }, 00:23:27.685 { 00:23:27.685 "subsystem": "iobuf", 00:23:27.685 "config": [ 00:23:27.685 { 00:23:27.685 "method": "iobuf_set_options", 00:23:27.685 "params": { 00:23:27.685 "small_pool_count": 8192, 00:23:27.685 "large_pool_count": 1024, 00:23:27.685 "small_bufsize": 8192, 00:23:27.685 "large_bufsize": 135168, 00:23:27.685 "enable_numa": false 00:23:27.685 } 00:23:27.685 } 00:23:27.685 ] 00:23:27.685 }, 00:23:27.685 { 00:23:27.685 "subsystem": "sock", 00:23:27.685 "config": [ 00:23:27.685 { 00:23:27.685 "method": "sock_set_default_impl", 00:23:27.685 "params": { 00:23:27.685 "impl_name": "posix" 00:23:27.685 } 00:23:27.685 }, 00:23:27.685 { 00:23:27.685 "method": "sock_impl_set_options", 00:23:27.685 "params": { 00:23:27.685 "impl_name": "ssl", 00:23:27.685 "recv_buf_size": 4096, 00:23:27.685 "send_buf_size": 4096, 00:23:27.685 "enable_recv_pipe": true, 00:23:27.685 "enable_quickack": false, 00:23:27.685 "enable_placement_id": 0, 00:23:27.685 "enable_zerocopy_send_server": true, 00:23:27.685 "enable_zerocopy_send_client": false, 00:23:27.685 "zerocopy_threshold": 0, 00:23:27.685 "tls_version": 0, 00:23:27.685 "enable_ktls": false 00:23:27.685 } 00:23:27.685 }, 00:23:27.685 { 00:23:27.685 "method": "sock_impl_set_options", 00:23:27.685 "params": { 00:23:27.685 "impl_name": "posix", 00:23:27.685 "recv_buf_size": 2097152, 00:23:27.685 "send_buf_size": 2097152, 00:23:27.685 "enable_recv_pipe": true, 00:23:27.685 "enable_quickack": false, 00:23:27.685 "enable_placement_id": 0, 
00:23:27.685 "enable_zerocopy_send_server": true, 00:23:27.685 "enable_zerocopy_send_client": false, 00:23:27.685 "zerocopy_threshold": 0, 00:23:27.685 "tls_version": 0, 00:23:27.685 "enable_ktls": false 00:23:27.685 } 00:23:27.685 } 00:23:27.685 ] 00:23:27.685 }, 00:23:27.685 { 00:23:27.685 "subsystem": "vmd", 00:23:27.685 "config": [] 00:23:27.686 }, 00:23:27.686 { 00:23:27.686 "subsystem": "accel", 00:23:27.686 "config": [ 00:23:27.686 { 00:23:27.686 "method": "accel_set_options", 00:23:27.686 "params": { 00:23:27.686 "small_cache_size": 128, 00:23:27.686 "large_cache_size": 16, 00:23:27.686 "task_count": 2048, 00:23:27.686 "sequence_count": 2048, 00:23:27.686 "buf_count": 2048 00:23:27.686 } 00:23:27.686 } 00:23:27.686 ] 00:23:27.686 }, 00:23:27.686 { 00:23:27.686 "subsystem": "bdev", 00:23:27.686 "config": [ 00:23:27.686 { 00:23:27.686 "method": "bdev_set_options", 00:23:27.686 "params": { 00:23:27.686 "bdev_io_pool_size": 65535, 00:23:27.686 "bdev_io_cache_size": 256, 00:23:27.686 "bdev_auto_examine": true, 00:23:27.686 "iobuf_small_cache_size": 128, 00:23:27.686 "iobuf_large_cache_size": 16 00:23:27.686 } 00:23:27.686 }, 00:23:27.686 { 00:23:27.686 "method": "bdev_raid_set_options", 00:23:27.686 "params": { 00:23:27.686 "process_window_size_kb": 1024, 00:23:27.686 "process_max_bandwidth_mb_sec": 0 00:23:27.686 } 00:23:27.686 }, 00:23:27.686 { 00:23:27.686 "method": "bdev_iscsi_set_options", 00:23:27.686 "params": { 00:23:27.686 "timeout_sec": 30 00:23:27.686 } 00:23:27.686 }, 00:23:27.686 { 00:23:27.686 "method": "bdev_nvme_set_options", 00:23:27.686 "params": { 00:23:27.686 "action_on_timeout": "none", 00:23:27.686 "timeout_us": 0, 00:23:27.686 "timeout_admin_us": 0, 00:23:27.686 "keep_alive_timeout_ms": 10000, 00:23:27.686 "arbitration_burst": 0, 00:23:27.686 "low_priority_weight": 0, 00:23:27.686 "medium_priority_weight": 0, 00:23:27.686 "high_priority_weight": 0, 00:23:27.686 "nvme_adminq_poll_period_us": 10000, 00:23:27.686 "nvme_ioq_poll_period_us": 0, 
00:23:27.686 "io_queue_requests": 0, 00:23:27.686 "delay_cmd_submit": true, 00:23:27.686 "transport_retry_count": 4, 00:23:27.686 "bdev_retry_count": 3, 00:23:27.686 "transport_ack_timeout": 0, 00:23:27.686 "ctrlr_loss_timeout_sec": 0, 00:23:27.686 "reconnect_delay_sec": 0, 00:23:27.686 "fast_io_fail_timeout_sec": 0, 00:23:27.686 "disable_auto_failback": false, 00:23:27.686 "generate_uuids": false, 00:23:27.686 "transport_tos": 0, 00:23:27.686 "nvme_error_stat": false, 00:23:27.686 "rdma_srq_size": 0, 00:23:27.686 "io_path_stat": false, 00:23:27.686 "allow_accel_sequence": false, 00:23:27.686 "rdma_max_cq_size": 0, 00:23:27.686 "rdma_cm_event_timeout_ms": 0, 00:23:27.686 "dhchap_digests": [ 00:23:27.686 "sha256", 00:23:27.686 "sha384", 00:23:27.686 "sha512" 00:23:27.686 ], 00:23:27.686 "dhchap_dhgroups": [ 00:23:27.686 "null", 00:23:27.686 "ffdhe2048", 00:23:27.686 "ffdhe3072", 00:23:27.686 "ffdhe4096", 00:23:27.686 "ffdhe6144", 00:23:27.686 "ffdhe8192" 00:23:27.686 ], 00:23:27.686 "rdma_umr_per_io": false 00:23:27.686 } 00:23:27.686 }, 00:23:27.686 { 00:23:27.686 "method": "bdev_nvme_set_hotplug", 00:23:27.686 "params": { 00:23:27.686 "period_us": 100000, 00:23:27.686 "enable": false 00:23:27.686 } 00:23:27.686 }, 00:23:27.686 { 00:23:27.686 "method": "bdev_malloc_create", 00:23:27.686 "params": { 00:23:27.686 "name": "malloc0", 00:23:27.686 "num_blocks": 8192, 00:23:27.686 "block_size": 4096, 00:23:27.686 "physical_block_size": 4096, 00:23:27.686 "uuid": "2368f061-883d-47c9-9b44-47ed354a0fe3", 00:23:27.686 "optimal_io_boundary": 0, 00:23:27.686 "md_size": 0, 00:23:27.686 "dif_type": 0, 00:23:27.686 "dif_is_head_of_md": false, 00:23:27.686 "dif_pi_format": 0 00:23:27.686 } 00:23:27.686 }, 00:23:27.686 { 00:23:27.686 "method": "bdev_wait_for_examine" 00:23:27.686 } 00:23:27.686 ] 00:23:27.686 }, 00:23:27.686 { 00:23:27.686 "subsystem": "nbd", 00:23:27.686 "config": [] 00:23:27.686 }, 00:23:27.686 { 00:23:27.686 "subsystem": "scheduler", 00:23:27.686 "config": [ 
00:23:27.686 { 00:23:27.686 "method": "framework_set_scheduler", 00:23:27.686 "params": { 00:23:27.686 "name": "static" 00:23:27.686 } 00:23:27.686 } 00:23:27.686 ] 00:23:27.686 }, 00:23:27.686 { 00:23:27.686 "subsystem": "nvmf", 00:23:27.686 "config": [ 00:23:27.686 { 00:23:27.686 "method": "nvmf_set_config", 00:23:27.686 "params": { 00:23:27.686 "discovery_filter": "match_any", 00:23:27.686 "admin_cmd_passthru": { 00:23:27.686 "identify_ctrlr": false 00:23:27.686 }, 00:23:27.686 "dhchap_digests": [ 00:23:27.686 "sha256", 00:23:27.686 "sha384", 00:23:27.686 "sha512" 00:23:27.686 ], 00:23:27.686 "dhchap_dhgroups": [ 00:23:27.686 "null", 00:23:27.686 "ffdhe2048", 00:23:27.686 "ffdhe3072", 00:23:27.686 "ffdhe4096", 00:23:27.686 "ffdhe6144", 00:23:27.686 "ffdhe8192" 00:23:27.686 ] 00:23:27.686 } 00:23:27.686 }, 00:23:27.686 { 00:23:27.686 "method": "nvmf_set_max_subsystems", 00:23:27.686 "params": { 00:23:27.686 "max_subsystems": 1024 00:23:27.686 } 00:23:27.686 }, 00:23:27.686 { 00:23:27.686 "method": "nvmf_set_crdt", 00:23:27.686 "params": { 00:23:27.686 "crdt1": 0, 00:23:27.686 "crdt2": 0, 00:23:27.686 "crdt3": 0 00:23:27.686 } 00:23:27.686 }, 00:23:27.686 { 00:23:27.686 "method": "nvmf_create_transport", 00:23:27.686 "params": { 00:23:27.686 "trtype": "TCP", 00:23:27.686 "max_queue_depth": 128, 00:23:27.686 "max_io_qpairs_per_ctrlr": 127, 00:23:27.686 "in_capsule_data_size": 4096, 00:23:27.686 "max_io_size": 131072, 00:23:27.686 "io_unit_size": 131072, 00:23:27.686 "max_aq_depth": 128, 00:23:27.686 "num_shared_buffers": 511, 00:23:27.686 "buf_cache_size": 4294967295, 00:23:27.686 "dif_insert_or_strip": false, 00:23:27.686 "zcopy": false, 00:23:27.686 "c2h_success": false, 00:23:27.686 "sock_priority": 0, 00:23:27.686 "abort_timeout_sec": 1, 00:23:27.686 "ack_timeout": 0, 00:23:27.686 "data_wr_pool_size": 0 00:23:27.686 } 00:23:27.686 }, 00:23:27.686 { 00:23:27.686 "method": "nvmf_create_subsystem", 00:23:27.686 "params": { 00:23:27.686 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:27.686 "allow_any_host": false, 00:23:27.686 "serial_number": "SPDK00000000000001", 00:23:27.686 "model_number": "SPDK bdev Controller", 00:23:27.686 "max_namespaces": 10, 00:23:27.686 "min_cntlid": 1, 00:23:27.686 "max_cntlid": 65519, 00:23:27.686 "ana_reporting": false 00:23:27.686 } 00:23:27.686 }, 00:23:27.686 { 00:23:27.686 "method": "nvmf_subsystem_add_host", 00:23:27.687 "params": { 00:23:27.687 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.687 "host": "nqn.2016-06.io.spdk:host1", 00:23:27.687 "psk": "key0" 00:23:27.687 } 00:23:27.687 }, 00:23:27.687 { 00:23:27.687 "method": "nvmf_subsystem_add_ns", 00:23:27.687 "params": { 00:23:27.687 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.687 "namespace": { 00:23:27.687 "nsid": 1, 00:23:27.687 "bdev_name": "malloc0", 00:23:27.687 "nguid": "2368F061883D47C99B4447ED354A0FE3", 00:23:27.687 "uuid": "2368f061-883d-47c9-9b44-47ed354a0fe3", 00:23:27.687 "no_auto_visible": false 00:23:27.687 } 00:23:27.687 } 00:23:27.687 }, 00:23:27.687 { 00:23:27.687 "method": "nvmf_subsystem_add_listener", 00:23:27.687 "params": { 00:23:27.687 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.687 "listen_address": { 00:23:27.687 "trtype": "TCP", 00:23:27.687 "adrfam": "IPv4", 00:23:27.687 "traddr": "10.0.0.2", 00:23:27.687 "trsvcid": "4420" 00:23:27.687 }, 00:23:27.687 "secure_channel": true 00:23:27.687 } 00:23:27.687 } 00:23:27.687 ] 00:23:27.687 } 00:23:27.687 ] 00:23:27.687 }' 00:23:27.687 02:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:27.946 02:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:27.946 "subsystems": [ 00:23:27.946 { 00:23:27.946 "subsystem": "keyring", 00:23:27.946 "config": [ 00:23:27.946 { 00:23:27.946 "method": "keyring_file_add_key", 00:23:27.946 "params": { 00:23:27.946 "name": "key0", 00:23:27.946 "path": 
"/tmp/tmp.vFEfGfbcOR" 00:23:27.946 } 00:23:27.946 } 00:23:27.946 ] 00:23:27.946 }, 00:23:27.946 { 00:23:27.946 "subsystem": "iobuf", 00:23:27.946 "config": [ 00:23:27.946 { 00:23:27.946 "method": "iobuf_set_options", 00:23:27.946 "params": { 00:23:27.946 "small_pool_count": 8192, 00:23:27.946 "large_pool_count": 1024, 00:23:27.946 "small_bufsize": 8192, 00:23:27.946 "large_bufsize": 135168, 00:23:27.946 "enable_numa": false 00:23:27.946 } 00:23:27.946 } 00:23:27.946 ] 00:23:27.946 }, 00:23:27.946 { 00:23:27.946 "subsystem": "sock", 00:23:27.946 "config": [ 00:23:27.946 { 00:23:27.946 "method": "sock_set_default_impl", 00:23:27.946 "params": { 00:23:27.946 "impl_name": "posix" 00:23:27.946 } 00:23:27.946 }, 00:23:27.946 { 00:23:27.946 "method": "sock_impl_set_options", 00:23:27.946 "params": { 00:23:27.946 "impl_name": "ssl", 00:23:27.946 "recv_buf_size": 4096, 00:23:27.946 "send_buf_size": 4096, 00:23:27.946 "enable_recv_pipe": true, 00:23:27.946 "enable_quickack": false, 00:23:27.946 "enable_placement_id": 0, 00:23:27.946 "enable_zerocopy_send_server": true, 00:23:27.946 "enable_zerocopy_send_client": false, 00:23:27.946 "zerocopy_threshold": 0, 00:23:27.946 "tls_version": 0, 00:23:27.946 "enable_ktls": false 00:23:27.946 } 00:23:27.946 }, 00:23:27.946 { 00:23:27.946 "method": "sock_impl_set_options", 00:23:27.947 "params": { 00:23:27.947 "impl_name": "posix", 00:23:27.947 "recv_buf_size": 2097152, 00:23:27.947 "send_buf_size": 2097152, 00:23:27.947 "enable_recv_pipe": true, 00:23:27.947 "enable_quickack": false, 00:23:27.947 "enable_placement_id": 0, 00:23:27.947 "enable_zerocopy_send_server": true, 00:23:27.947 "enable_zerocopy_send_client": false, 00:23:27.947 "zerocopy_threshold": 0, 00:23:27.947 "tls_version": 0, 00:23:27.947 "enable_ktls": false 00:23:27.947 } 00:23:27.947 } 00:23:27.947 ] 00:23:27.947 }, 00:23:27.947 { 00:23:27.947 "subsystem": "vmd", 00:23:27.947 "config": [] 00:23:27.947 }, 00:23:27.947 { 00:23:27.947 "subsystem": "accel", 00:23:27.947 
"config": [ 00:23:27.947 { 00:23:27.947 "method": "accel_set_options", 00:23:27.947 "params": { 00:23:27.947 "small_cache_size": 128, 00:23:27.947 "large_cache_size": 16, 00:23:27.947 "task_count": 2048, 00:23:27.947 "sequence_count": 2048, 00:23:27.947 "buf_count": 2048 00:23:27.947 } 00:23:27.947 } 00:23:27.947 ] 00:23:27.947 }, 00:23:27.947 { 00:23:27.947 "subsystem": "bdev", 00:23:27.947 "config": [ 00:23:27.947 { 00:23:27.947 "method": "bdev_set_options", 00:23:27.947 "params": { 00:23:27.947 "bdev_io_pool_size": 65535, 00:23:27.947 "bdev_io_cache_size": 256, 00:23:27.947 "bdev_auto_examine": true, 00:23:27.947 "iobuf_small_cache_size": 128, 00:23:27.947 "iobuf_large_cache_size": 16 00:23:27.947 } 00:23:27.947 }, 00:23:27.947 { 00:23:27.947 "method": "bdev_raid_set_options", 00:23:27.947 "params": { 00:23:27.947 "process_window_size_kb": 1024, 00:23:27.947 "process_max_bandwidth_mb_sec": 0 00:23:27.947 } 00:23:27.947 }, 00:23:27.947 { 00:23:27.947 "method": "bdev_iscsi_set_options", 00:23:27.947 "params": { 00:23:27.947 "timeout_sec": 30 00:23:27.947 } 00:23:27.947 }, 00:23:27.947 { 00:23:27.947 "method": "bdev_nvme_set_options", 00:23:27.947 "params": { 00:23:27.947 "action_on_timeout": "none", 00:23:27.947 "timeout_us": 0, 00:23:27.947 "timeout_admin_us": 0, 00:23:27.947 "keep_alive_timeout_ms": 10000, 00:23:27.947 "arbitration_burst": 0, 00:23:27.947 "low_priority_weight": 0, 00:23:27.947 "medium_priority_weight": 0, 00:23:27.947 "high_priority_weight": 0, 00:23:27.947 "nvme_adminq_poll_period_us": 10000, 00:23:27.947 "nvme_ioq_poll_period_us": 0, 00:23:27.947 "io_queue_requests": 512, 00:23:27.947 "delay_cmd_submit": true, 00:23:27.947 "transport_retry_count": 4, 00:23:27.947 "bdev_retry_count": 3, 00:23:27.947 "transport_ack_timeout": 0, 00:23:27.947 "ctrlr_loss_timeout_sec": 0, 00:23:27.947 "reconnect_delay_sec": 0, 00:23:27.947 "fast_io_fail_timeout_sec": 0, 00:23:27.947 "disable_auto_failback": false, 00:23:27.947 "generate_uuids": false, 00:23:27.947 
"transport_tos": 0, 00:23:27.947 "nvme_error_stat": false, 00:23:27.947 "rdma_srq_size": 0, 00:23:27.947 "io_path_stat": false, 00:23:27.947 "allow_accel_sequence": false, 00:23:27.947 "rdma_max_cq_size": 0, 00:23:27.947 "rdma_cm_event_timeout_ms": 0, 00:23:27.947 "dhchap_digests": [ 00:23:27.947 "sha256", 00:23:27.947 "sha384", 00:23:27.947 "sha512" 00:23:27.947 ], 00:23:27.947 "dhchap_dhgroups": [ 00:23:27.947 "null", 00:23:27.947 "ffdhe2048", 00:23:27.947 "ffdhe3072", 00:23:27.947 "ffdhe4096", 00:23:27.947 "ffdhe6144", 00:23:27.947 "ffdhe8192" 00:23:27.947 ], 00:23:27.947 "rdma_umr_per_io": false 00:23:27.947 } 00:23:27.947 }, 00:23:27.947 { 00:23:27.947 "method": "bdev_nvme_attach_controller", 00:23:27.947 "params": { 00:23:27.947 "name": "TLSTEST", 00:23:27.947 "trtype": "TCP", 00:23:27.947 "adrfam": "IPv4", 00:23:27.947 "traddr": "10.0.0.2", 00:23:27.947 "trsvcid": "4420", 00:23:27.947 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:27.947 "prchk_reftag": false, 00:23:27.947 "prchk_guard": false, 00:23:27.947 "ctrlr_loss_timeout_sec": 0, 00:23:27.947 "reconnect_delay_sec": 0, 00:23:27.947 "fast_io_fail_timeout_sec": 0, 00:23:27.947 "psk": "key0", 00:23:27.947 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:27.947 "hdgst": false, 00:23:27.947 "ddgst": false, 00:23:27.947 "multipath": "multipath" 00:23:27.947 } 00:23:27.947 }, 00:23:27.947 { 00:23:27.947 "method": "bdev_nvme_set_hotplug", 00:23:27.947 "params": { 00:23:27.947 "period_us": 100000, 00:23:27.947 "enable": false 00:23:27.947 } 00:23:27.947 }, 00:23:27.947 { 00:23:27.947 "method": "bdev_wait_for_examine" 00:23:27.947 } 00:23:27.947 ] 00:23:27.947 }, 00:23:27.947 { 00:23:27.947 "subsystem": "nbd", 00:23:27.947 "config": [] 00:23:27.947 } 00:23:27.947 ] 00:23:27.947 }' 00:23:27.947 02:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1022623 00:23:27.947 02:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1022623 ']' 00:23:27.947 02:44:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1022623 00:23:27.947 02:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:27.947 02:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:27.947 02:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1022623 00:23:28.206 02:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:28.207 02:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:28.207 02:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1022623' 00:23:28.207 killing process with pid 1022623 00:23:28.207 02:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1022623 00:23:28.207 Received shutdown signal, test time was about 10.000000 seconds 00:23:28.207 00:23:28.207 Latency(us) 00:23:28.207 [2024-12-16T01:44:58.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:28.207 [2024-12-16T01:44:58.866Z] =================================================================================================================== 00:23:28.207 [2024-12-16T01:44:58.866Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:28.207 02:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1022623 00:23:28.207 02:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1022260 00:23:28.207 02:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1022260 ']' 00:23:28.207 02:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1022260 00:23:28.207 02:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 
00:23:28.207 02:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:28.207 02:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1022260 00:23:28.207 02:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:28.207 02:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:28.207 02:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1022260' 00:23:28.207 killing process with pid 1022260 00:23:28.207 02:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1022260 00:23:28.207 02:44:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1022260 00:23:28.466 02:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:28.466 02:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:28.466 02:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:28.466 02:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:28.466 "subsystems": [ 00:23:28.466 { 00:23:28.466 "subsystem": "keyring", 00:23:28.466 "config": [ 00:23:28.466 { 00:23:28.466 "method": "keyring_file_add_key", 00:23:28.466 "params": { 00:23:28.466 "name": "key0", 00:23:28.466 "path": "/tmp/tmp.vFEfGfbcOR" 00:23:28.466 } 00:23:28.466 } 00:23:28.466 ] 00:23:28.466 }, 00:23:28.466 { 00:23:28.466 "subsystem": "iobuf", 00:23:28.466 "config": [ 00:23:28.466 { 00:23:28.466 "method": "iobuf_set_options", 00:23:28.466 "params": { 00:23:28.466 "small_pool_count": 8192, 00:23:28.466 "large_pool_count": 1024, 00:23:28.466 "small_bufsize": 8192, 00:23:28.466 "large_bufsize": 135168, 00:23:28.466 "enable_numa": false 
00:23:28.466 } 00:23:28.466 } 00:23:28.466 ] 00:23:28.466 }, 00:23:28.466 { 00:23:28.466 "subsystem": "sock", 00:23:28.466 "config": [ 00:23:28.466 { 00:23:28.466 "method": "sock_set_default_impl", 00:23:28.466 "params": { 00:23:28.466 "impl_name": "posix" 00:23:28.466 } 00:23:28.466 }, 00:23:28.466 { 00:23:28.466 "method": "sock_impl_set_options", 00:23:28.466 "params": { 00:23:28.466 "impl_name": "ssl", 00:23:28.466 "recv_buf_size": 4096, 00:23:28.467 "send_buf_size": 4096, 00:23:28.467 "enable_recv_pipe": true, 00:23:28.467 "enable_quickack": false, 00:23:28.467 "enable_placement_id": 0, 00:23:28.467 "enable_zerocopy_send_server": true, 00:23:28.467 "enable_zerocopy_send_client": false, 00:23:28.467 "zerocopy_threshold": 0, 00:23:28.467 "tls_version": 0, 00:23:28.467 "enable_ktls": false 00:23:28.467 } 00:23:28.467 }, 00:23:28.467 { 00:23:28.467 "method": "sock_impl_set_options", 00:23:28.467 "params": { 00:23:28.467 "impl_name": "posix", 00:23:28.467 "recv_buf_size": 2097152, 00:23:28.467 "send_buf_size": 2097152, 00:23:28.467 "enable_recv_pipe": true, 00:23:28.467 "enable_quickack": false, 00:23:28.467 "enable_placement_id": 0, 00:23:28.467 "enable_zerocopy_send_server": true, 00:23:28.467 "enable_zerocopy_send_client": false, 00:23:28.467 "zerocopy_threshold": 0, 00:23:28.467 "tls_version": 0, 00:23:28.467 "enable_ktls": false 00:23:28.467 } 00:23:28.467 } 00:23:28.467 ] 00:23:28.467 }, 00:23:28.467 { 00:23:28.467 "subsystem": "vmd", 00:23:28.467 "config": [] 00:23:28.467 }, 00:23:28.467 { 00:23:28.467 "subsystem": "accel", 00:23:28.467 "config": [ 00:23:28.467 { 00:23:28.467 "method": "accel_set_options", 00:23:28.467 "params": { 00:23:28.467 "small_cache_size": 128, 00:23:28.467 "large_cache_size": 16, 00:23:28.467 "task_count": 2048, 00:23:28.467 "sequence_count": 2048, 00:23:28.467 "buf_count": 2048 00:23:28.467 } 00:23:28.467 } 00:23:28.467 ] 00:23:28.467 }, 00:23:28.467 { 00:23:28.467 "subsystem": "bdev", 00:23:28.467 "config": [ 00:23:28.467 { 
00:23:28.467 "method": "bdev_set_options", 00:23:28.467 "params": { 00:23:28.467 "bdev_io_pool_size": 65535, 00:23:28.467 "bdev_io_cache_size": 256, 00:23:28.467 "bdev_auto_examine": true, 00:23:28.467 "iobuf_small_cache_size": 128, 00:23:28.467 "iobuf_large_cache_size": 16 00:23:28.467 } 00:23:28.467 }, 00:23:28.467 { 00:23:28.467 "method": "bdev_raid_set_options", 00:23:28.467 "params": { 00:23:28.467 "process_window_size_kb": 1024, 00:23:28.467 "process_max_bandwidth_mb_sec": 0 00:23:28.467 } 00:23:28.467 }, 00:23:28.467 { 00:23:28.467 "method": "bdev_iscsi_set_options", 00:23:28.467 "params": { 00:23:28.467 "timeout_sec": 30 00:23:28.467 } 00:23:28.467 }, 00:23:28.467 { 00:23:28.467 "method": "bdev_nvme_set_options", 00:23:28.467 "params": { 00:23:28.467 "action_on_timeout": "none", 00:23:28.467 "timeout_us": 0, 00:23:28.467 "timeout_admin_us": 0, 00:23:28.467 "keep_alive_timeout_ms": 10000, 00:23:28.467 "arbitration_burst": 0, 00:23:28.467 "low_priority_weight": 0, 00:23:28.467 "medium_priority_weight": 0, 00:23:28.467 "high_priority_weight": 0, 00:23:28.467 "nvme_adminq_poll_period_us": 10000, 00:23:28.467 "nvme_ioq_poll_period_us": 0, 00:23:28.467 "io_queue_requests": 0, 00:23:28.467 "delay_cmd_submit": true, 00:23:28.467 "transport_retry_count": 4, 00:23:28.467 "bdev_retry_count": 3, 00:23:28.467 "transport_ack_timeout": 0, 00:23:28.467 "ctrlr_loss_timeout_sec": 0, 00:23:28.467 "reconnect_delay_sec": 0, 00:23:28.467 "fast_io_fail_timeout_sec": 0, 00:23:28.467 "disable_auto_failback": false, 00:23:28.467 "generate_uuids": false, 00:23:28.467 "transport_tos": 0, 00:23:28.467 "nvme_error_stat": false, 00:23:28.467 "rdma_srq_size": 0, 00:23:28.467 "io_path_stat": false, 00:23:28.467 "allow_accel_sequence": false, 00:23:28.467 "rdma_max_cq_size": 0, 00:23:28.467 "rdma_cm_event_timeout_ms": 0, 00:23:28.467 "dhchap_digests": [ 00:23:28.467 "sha256", 00:23:28.467 "sha384", 00:23:28.467 "sha512" 00:23:28.467 ], 00:23:28.467 "dhchap_dhgroups": [ 00:23:28.467 "null", 
00:23:28.467 "ffdhe2048", 00:23:28.467 "ffdhe3072", 00:23:28.467 "ffdhe4096", 00:23:28.467 "ffdhe6144", 00:23:28.467 "ffdhe8192" 00:23:28.467 ], 00:23:28.467 "rdma_umr_per_io": false 00:23:28.467 } 00:23:28.467 }, 00:23:28.467 { 00:23:28.467 "method": "bdev_nvme_set_hotplug", 00:23:28.467 "params": { 00:23:28.467 "period_us": 100000, 00:23:28.467 "enable": false 00:23:28.467 } 00:23:28.467 }, 00:23:28.467 { 00:23:28.467 "method": "bdev_malloc_create", 00:23:28.467 "params": { 00:23:28.467 "name": "malloc0", 00:23:28.467 "num_blocks": 8192, 00:23:28.467 "block_size": 4096, 00:23:28.467 "physical_block_size": 4096, 00:23:28.467 "uuid": "2368f061-883d-47c9-9b44-47ed354a0fe3", 00:23:28.467 "optimal_io_boundary": 0, 00:23:28.467 "md_size": 0, 00:23:28.467 "dif_type": 0, 00:23:28.467 "dif_is_head_of_md": false, 00:23:28.467 "dif_pi_format": 0 00:23:28.467 } 00:23:28.467 }, 00:23:28.467 { 00:23:28.467 "method": "bdev_wait_for_examine" 00:23:28.467 } 00:23:28.467 ] 00:23:28.467 }, 00:23:28.467 { 00:23:28.467 "subsystem": "nbd", 00:23:28.467 "config": [] 00:23:28.467 }, 00:23:28.467 { 00:23:28.467 "subsystem": "scheduler", 00:23:28.467 "config": [ 00:23:28.467 { 00:23:28.467 "method": "framework_set_scheduler", 00:23:28.467 "params": { 00:23:28.467 "name": "static" 00:23:28.467 } 00:23:28.467 } 00:23:28.467 ] 00:23:28.467 }, 00:23:28.467 { 00:23:28.467 "subsystem": "nvmf", 00:23:28.467 "config": [ 00:23:28.467 { 00:23:28.467 "method": "nvmf_set_config", 00:23:28.467 "params": { 00:23:28.467 "discovery_filter": "match_any", 00:23:28.467 "admin_cmd_passthru": { 00:23:28.467 "identify_ctrlr": false 00:23:28.467 }, 00:23:28.467 "dhchap_digests": [ 00:23:28.467 "sha256", 00:23:28.467 "sha384", 00:23:28.467 "sha512" 00:23:28.467 ], 00:23:28.467 "dhchap_dhgroups": [ 00:23:28.467 "null", 00:23:28.467 "ffdhe2048", 00:23:28.467 "ffdhe3072", 00:23:28.467 "ffdhe4096", 00:23:28.467 "ffdhe6144", 00:23:28.467 "ffdhe8192" 00:23:28.467 ] 00:23:28.467 } 00:23:28.467 }, 00:23:28.467 { 
00:23:28.467 "method": "nvmf_set_max_subsystems", 00:23:28.467 "params": { 00:23:28.467 "max_subsystems": 1024 00:23:28.467 } 00:23:28.467 }, 00:23:28.467 { 00:23:28.467 "method": "nvmf_set_crdt", 00:23:28.467 "params": { 00:23:28.467 "crdt1": 0, 00:23:28.467 "crdt2": 0, 00:23:28.467 "crdt3": 0 00:23:28.467 } 00:23:28.467 }, 00:23:28.467 { 00:23:28.467 "method": "nvmf_create_transport", 00:23:28.467 "params": { 00:23:28.467 "trtype": "TCP", 00:23:28.467 "max_queue_depth": 128, 00:23:28.467 "max_io_qpairs_per_ctrlr": 127, 00:23:28.467 "in_capsule_data_size": 4096, 00:23:28.467 "max_io_size": 131072, 00:23:28.467 "io_unit_size": 131072, 00:23:28.467 "max_aq_depth": 128, 00:23:28.467 "num_shared_buffers": 511, 00:23:28.467 "buf_cache_size": 4294967295, 00:23:28.467 "dif_insert_or_strip": false, 00:23:28.467 "zcopy": false, 00:23:28.467 "c2h_success": false, 00:23:28.467 "sock_priority": 0, 00:23:28.467 "abort_timeout_sec": 1, 00:23:28.467 "ack_timeout": 0, 00:23:28.467 "data_wr_pool_size": 0 00:23:28.467 } 00:23:28.468 }, 00:23:28.468 { 00:23:28.468 "method": "nvmf_create_subsystem", 00:23:28.468 "params": { 00:23:28.468 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.468 "allow_any_host": false, 00:23:28.468 "serial_number": "SPDK00000000000001", 00:23:28.468 "model_number": "SPDK bdev Controller", 00:23:28.468 "max_namespaces": 10, 00:23:28.468 "min_cntlid": 1, 00:23:28.468 "max_cntlid": 65519, 00:23:28.468 "ana_reporting": false 00:23:28.468 } 00:23:28.468 }, 00:23:28.468 { 00:23:28.468 "method": "nvmf_subsystem_add_host", 00:23:28.468 "params": { 00:23:28.468 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.468 "host": "nqn.2016-06.io.spdk:host1", 00:23:28.468 "psk": "key0" 00:23:28.468 } 00:23:28.468 }, 00:23:28.468 { 00:23:28.468 "method": "nvmf_subsystem_add_ns", 00:23:28.468 "params": { 00:23:28.468 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.468 "namespace": { 00:23:28.468 "nsid": 1, 00:23:28.468 "bdev_name": "malloc0", 00:23:28.468 "nguid": 
"2368F061883D47C99B4447ED354A0FE3", 00:23:28.468 "uuid": "2368f061-883d-47c9-9b44-47ed354a0fe3", 00:23:28.468 "no_auto_visible": false 00:23:28.468 } 00:23:28.468 } 00:23:28.468 }, 00:23:28.468 { 00:23:28.468 "method": "nvmf_subsystem_add_listener", 00:23:28.468 "params": { 00:23:28.468 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:28.468 "listen_address": { 00:23:28.468 "trtype": "TCP", 00:23:28.468 "adrfam": "IPv4", 00:23:28.468 "traddr": "10.0.0.2", 00:23:28.468 "trsvcid": "4420" 00:23:28.468 }, 00:23:28.468 "secure_channel": true 00:23:28.468 } 00:23:28.468 } 00:23:28.468 ] 00:23:28.468 } 00:23:28.468 ] 00:23:28.468 }' 00:23:28.468 02:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.468 02:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1022876 00:23:28.468 02:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:28.468 02:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1022876 00:23:28.468 02:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1022876 ']' 00:23:28.468 02:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:28.468 02:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:28.468 02:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:28.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:28.468 02:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:28.468 02:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:28.468 [2024-12-16 02:44:59.058501] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:28.468 [2024-12-16 02:44:59.058543] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.727 [2024-12-16 02:44:59.134260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.727 [2024-12-16 02:44:59.154801] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.727 [2024-12-16 02:44:59.154837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.727 [2024-12-16 02:44:59.154844] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.727 [2024-12-16 02:44:59.154854] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.727 [2024-12-16 02:44:59.154859] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:28.727 [2024-12-16 02:44:59.155357] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.727 [2024-12-16 02:44:59.361826] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.986 [2024-12-16 02:44:59.393864] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:28.986 [2024-12-16 02:44:59.394054] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.245 02:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:29.245 02:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:29.245 02:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:29.245 02:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:29.245 02:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.504 02:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:29.504 02:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1023041 00:23:29.504 02:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1023041 /var/tmp/bdevperf.sock 00:23:29.504 02:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1023041 ']' 00:23:29.504 02:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:29.504 02:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:29.504 02:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:23:29.504 02:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:29.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:29.504 02:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:29.504 "subsystems": [ 00:23:29.504 { 00:23:29.504 "subsystem": "keyring", 00:23:29.504 "config": [ 00:23:29.504 { 00:23:29.504 "method": "keyring_file_add_key", 00:23:29.504 "params": { 00:23:29.504 "name": "key0", 00:23:29.504 "path": "/tmp/tmp.vFEfGfbcOR" 00:23:29.504 } 00:23:29.504 } 00:23:29.504 ] 00:23:29.504 }, 00:23:29.504 { 00:23:29.504 "subsystem": "iobuf", 00:23:29.504 "config": [ 00:23:29.504 { 00:23:29.504 "method": "iobuf_set_options", 00:23:29.504 "params": { 00:23:29.504 "small_pool_count": 8192, 00:23:29.504 "large_pool_count": 1024, 00:23:29.504 "small_bufsize": 8192, 00:23:29.504 "large_bufsize": 135168, 00:23:29.504 "enable_numa": false 00:23:29.504 } 00:23:29.504 } 00:23:29.504 ] 00:23:29.504 }, 00:23:29.504 { 00:23:29.504 "subsystem": "sock", 00:23:29.504 "config": [ 00:23:29.504 { 00:23:29.504 "method": "sock_set_default_impl", 00:23:29.504 "params": { 00:23:29.504 "impl_name": "posix" 00:23:29.504 } 00:23:29.504 }, 00:23:29.504 { 00:23:29.504 "method": "sock_impl_set_options", 00:23:29.504 "params": { 00:23:29.504 "impl_name": "ssl", 00:23:29.504 "recv_buf_size": 4096, 00:23:29.504 "send_buf_size": 4096, 00:23:29.504 "enable_recv_pipe": true, 00:23:29.504 "enable_quickack": false, 00:23:29.504 "enable_placement_id": 0, 00:23:29.504 "enable_zerocopy_send_server": true, 00:23:29.504 "enable_zerocopy_send_client": false, 00:23:29.504 "zerocopy_threshold": 0, 00:23:29.504 "tls_version": 0, 00:23:29.504 "enable_ktls": false 00:23:29.504 } 00:23:29.504 }, 00:23:29.504 { 00:23:29.504 "method": "sock_impl_set_options", 00:23:29.504 "params": { 
00:23:29.504 "impl_name": "posix", 00:23:29.504 "recv_buf_size": 2097152, 00:23:29.504 "send_buf_size": 2097152, 00:23:29.504 "enable_recv_pipe": true, 00:23:29.504 "enable_quickack": false, 00:23:29.504 "enable_placement_id": 0, 00:23:29.504 "enable_zerocopy_send_server": true, 00:23:29.504 "enable_zerocopy_send_client": false, 00:23:29.504 "zerocopy_threshold": 0, 00:23:29.504 "tls_version": 0, 00:23:29.504 "enable_ktls": false 00:23:29.504 } 00:23:29.504 } 00:23:29.504 ] 00:23:29.504 }, 00:23:29.504 { 00:23:29.504 "subsystem": "vmd", 00:23:29.504 "config": [] 00:23:29.504 }, 00:23:29.504 { 00:23:29.504 "subsystem": "accel", 00:23:29.504 "config": [ 00:23:29.504 { 00:23:29.504 "method": "accel_set_options", 00:23:29.504 "params": { 00:23:29.504 "small_cache_size": 128, 00:23:29.504 "large_cache_size": 16, 00:23:29.504 "task_count": 2048, 00:23:29.504 "sequence_count": 2048, 00:23:29.504 "buf_count": 2048 00:23:29.504 } 00:23:29.504 } 00:23:29.504 ] 00:23:29.504 }, 00:23:29.504 { 00:23:29.504 "subsystem": "bdev", 00:23:29.504 "config": [ 00:23:29.504 { 00:23:29.504 "method": "bdev_set_options", 00:23:29.504 "params": { 00:23:29.504 "bdev_io_pool_size": 65535, 00:23:29.504 "bdev_io_cache_size": 256, 00:23:29.504 "bdev_auto_examine": true, 00:23:29.504 "iobuf_small_cache_size": 128, 00:23:29.504 "iobuf_large_cache_size": 16 00:23:29.504 } 00:23:29.504 }, 00:23:29.504 { 00:23:29.504 "method": "bdev_raid_set_options", 00:23:29.504 "params": { 00:23:29.504 "process_window_size_kb": 1024, 00:23:29.504 "process_max_bandwidth_mb_sec": 0 00:23:29.504 } 00:23:29.504 }, 00:23:29.504 { 00:23:29.504 "method": "bdev_iscsi_set_options", 00:23:29.504 "params": { 00:23:29.504 "timeout_sec": 30 00:23:29.504 } 00:23:29.504 }, 00:23:29.504 { 00:23:29.504 "method": "bdev_nvme_set_options", 00:23:29.504 "params": { 00:23:29.504 "action_on_timeout": "none", 00:23:29.504 "timeout_us": 0, 00:23:29.504 "timeout_admin_us": 0, 00:23:29.504 "keep_alive_timeout_ms": 10000, 00:23:29.504 
"arbitration_burst": 0, 00:23:29.504 "low_priority_weight": 0, 00:23:29.504 "medium_priority_weight": 0, 00:23:29.504 "high_priority_weight": 0, 00:23:29.504 "nvme_adminq_poll_period_us": 10000, 00:23:29.504 "nvme_ioq_poll_period_us": 0, 00:23:29.504 "io_queue_requests": 512, 00:23:29.504 "delay_cmd_submit": true, 00:23:29.504 "transport_retry_count": 4, 00:23:29.504 "bdev_retry_count": 3, 00:23:29.504 "transport_ack_timeout": 0, 00:23:29.504 "ctrlr_loss_timeout_sec": 0, 00:23:29.504 "reconnect_delay_sec": 0, 00:23:29.504 "fast_io_fail_timeout_sec": 0, 00:23:29.504 "disable_auto_failback": false, 00:23:29.504 "generate_uuids": false, 00:23:29.504 "transport_tos": 0, 00:23:29.504 "nvme_error_stat": false, 00:23:29.504 "rdma_srq_size": 0, 00:23:29.504 "io_path_stat": false, 00:23:29.504 "allow_accel_sequence": false, 00:23:29.504 "rdma_max_cq_size": 0, 00:23:29.504 "rdma_cm_event_timeout_ms": 0, 00:23:29.504 "dhchap_digests": [ 00:23:29.504 "sha256", 00:23:29.504 "sha384", 00:23:29.504 "sha512" 00:23:29.504 ], 00:23:29.504 "dhchap_dhgroups": [ 00:23:29.504 "null", 00:23:29.504 "ffdhe2048", 00:23:29.504 "ffdhe3072", 00:23:29.504 "ffdhe4096", 00:23:29.504 "ffdhe6144", 00:23:29.504 "ffdhe8192" 00:23:29.504 ], 00:23:29.504 "rdma_umr_per_io": false 00:23:29.504 } 00:23:29.504 }, 00:23:29.504 { 00:23:29.504 "method": "bdev_nvme_attach_controller", 00:23:29.504 "params": { 00:23:29.504 "name": "TLSTEST", 00:23:29.504 "trtype": "TCP", 00:23:29.504 "adrfam": "IPv4", 00:23:29.504 "traddr": "10.0.0.2", 00:23:29.504 "trsvcid": "4420", 00:23:29.504 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:29.504 "prchk_reftag": false, 00:23:29.504 "prchk_guard": false, 00:23:29.504 "ctrlr_loss_timeout_sec": 0, 00:23:29.504 "reconnect_delay_sec": 0, 00:23:29.504 "fast_io_fail_timeout_sec": 0, 00:23:29.504 "psk": "key0", 00:23:29.504 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:29.504 "hdgst": false, 00:23:29.504 "ddgst": false, 00:23:29.504 "multipath": "multipath" 00:23:29.504 } 
00:23:29.504 }, 00:23:29.504 { 00:23:29.504 "method": "bdev_nvme_set_hotplug", 00:23:29.504 "params": { 00:23:29.504 "period_us": 100000, 00:23:29.504 "enable": false 00:23:29.504 } 00:23:29.504 }, 00:23:29.504 { 00:23:29.504 "method": "bdev_wait_for_examine" 00:23:29.504 } 00:23:29.504 ] 00:23:29.504 }, 00:23:29.504 { 00:23:29.504 "subsystem": "nbd", 00:23:29.504 "config": [] 00:23:29.504 } 00:23:29.504 ] 00:23:29.504 }' 00:23:29.504 02:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:29.504 02:44:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:29.504 [2024-12-16 02:44:59.967141] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:29.505 [2024-12-16 02:44:59.967193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1023041 ] 00:23:29.505 [2024-12-16 02:45:00.045251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.505 [2024-12-16 02:45:00.068956] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:29.764 [2024-12-16 02:45:00.217658] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:30.332 02:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:30.332 02:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:30.332 02:45:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:30.332 Running I/O for 10 seconds... 
00:23:32.646 5311.00 IOPS, 20.75 MiB/s [2024-12-16T01:45:04.241Z] 5463.50 IOPS, 21.34 MiB/s [2024-12-16T01:45:05.175Z] 5509.00 IOPS, 21.52 MiB/s [2024-12-16T01:45:06.109Z] 5532.00 IOPS, 21.61 MiB/s [2024-12-16T01:45:07.043Z] 5542.40 IOPS, 21.65 MiB/s [2024-12-16T01:45:07.979Z] 5544.83 IOPS, 21.66 MiB/s [2024-12-16T01:45:08.914Z] 5548.14 IOPS, 21.67 MiB/s [2024-12-16T01:45:10.290Z] 5540.88 IOPS, 21.64 MiB/s [2024-12-16T01:45:11.227Z] 5559.78 IOPS, 21.72 MiB/s [2024-12-16T01:45:11.227Z] 5561.00 IOPS, 21.72 MiB/s 00:23:40.568 Latency(us) 00:23:40.568 [2024-12-16T01:45:11.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.568 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:40.568 Verification LBA range: start 0x0 length 0x2000 00:23:40.568 TLSTESTn1 : 10.02 5565.19 21.74 0.00 0.00 22961.90 4743.56 50681.17 00:23:40.568 [2024-12-16T01:45:11.227Z] =================================================================================================================== 00:23:40.568 [2024-12-16T01:45:11.227Z] Total : 5565.19 21.74 0.00 0.00 22961.90 4743.56 50681.17 00:23:40.568 { 00:23:40.568 "results": [ 00:23:40.568 { 00:23:40.568 "job": "TLSTESTn1", 00:23:40.568 "core_mask": "0x4", 00:23:40.568 "workload": "verify", 00:23:40.568 "status": "finished", 00:23:40.568 "verify_range": { 00:23:40.568 "start": 0, 00:23:40.568 "length": 8192 00:23:40.568 }, 00:23:40.568 "queue_depth": 128, 00:23:40.568 "io_size": 4096, 00:23:40.568 "runtime": 10.015464, 00:23:40.568 "iops": 5565.193984023107, 00:23:40.568 "mibps": 21.739039000090262, 00:23:40.568 "io_failed": 0, 00:23:40.568 "io_timeout": 0, 00:23:40.568 "avg_latency_us": 22961.902671580814, 00:23:40.568 "min_latency_us": 4743.558095238095, 00:23:40.568 "max_latency_us": 50681.17333333333 00:23:40.568 } 00:23:40.568 ], 00:23:40.568 "core_count": 1 00:23:40.568 } 00:23:40.568 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:23:40.568 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1023041 00:23:40.568 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1023041 ']' 00:23:40.568 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1023041 00:23:40.568 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:40.568 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:40.568 02:45:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1023041 00:23:40.568 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:40.568 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:40.568 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1023041' 00:23:40.568 killing process with pid 1023041 00:23:40.568 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1023041 00:23:40.568 Received shutdown signal, test time was about 10.000000 seconds 00:23:40.568 00:23:40.568 Latency(us) 00:23:40.568 [2024-12-16T01:45:11.227Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.568 [2024-12-16T01:45:11.227Z] =================================================================================================================== 00:23:40.568 [2024-12-16T01:45:11.227Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:40.568 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1023041 00:23:40.568 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1022876 00:23:40.568 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 1022876 ']' 00:23:40.568 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1022876 00:23:40.568 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:40.568 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:40.568 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1022876 00:23:40.568 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:40.568 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:40.568 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1022876' 00:23:40.568 killing process with pid 1022876 00:23:40.568 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1022876 00:23:40.568 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1022876 00:23:40.827 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:40.827 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:40.827 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:40.827 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.827 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1025423 00:23:40.827 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1025423 00:23:40.827 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:40.827 
02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1025423 ']' 00:23:40.827 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.827 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:40.827 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.827 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:40.827 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.827 [2024-12-16 02:45:11.416910] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:40.827 [2024-12-16 02:45:11.416952] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.087 [2024-12-16 02:45:11.491790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.087 [2024-12-16 02:45:11.512902] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.087 [2024-12-16 02:45:11.512936] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.087 [2024-12-16 02:45:11.512943] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.087 [2024-12-16 02:45:11.512949] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:41.087 [2024-12-16 02:45:11.512954] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:41.087 [2024-12-16 02:45:11.513442] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.087 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:41.087 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:41.087 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:41.087 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:41.087 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.087 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.087 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.vFEfGfbcOR 00:23:41.087 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.vFEfGfbcOR 00:23:41.087 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:41.345 [2024-12-16 02:45:11.811740] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.345 02:45:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:41.604 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:41.604 [2024-12-16 02:45:12.176704] tcp.c:1049:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:23:41.604 [2024-12-16 02:45:12.176930] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.604 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:41.862 malloc0 00:23:41.862 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:42.120 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.vFEfGfbcOR 00:23:42.120 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:42.379 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:42.379 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1025671 00:23:42.379 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:42.379 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1025671 /var/tmp/bdevperf.sock 00:23:42.379 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1025671 ']' 00:23:42.379 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:42.379 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:42.379 
02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:42.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:42.379 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:42.379 02:45:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:42.379 [2024-12-16 02:45:12.945665] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:42.379 [2024-12-16 02:45:12.945710] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1025671 ] 00:23:42.379 [2024-12-16 02:45:13.018306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:42.637 [2024-12-16 02:45:13.040953] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.637 02:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:42.637 02:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:42.637 02:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vFEfGfbcOR 00:23:42.894 02:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:42.894 [2024-12-16 02:45:13.463970] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:23:42.894 nvme0n1 00:23:42.894 02:45:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:43.153 Running I/O for 1 seconds... 00:23:44.089 5454.00 IOPS, 21.30 MiB/s 00:23:44.089 Latency(us) 00:23:44.089 [2024-12-16T01:45:14.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.089 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:44.089 Verification LBA range: start 0x0 length 0x2000 00:23:44.089 nvme0n1 : 1.01 5507.84 21.51 0.00 0.00 23082.50 4899.60 23842.62 00:23:44.089 [2024-12-16T01:45:14.748Z] =================================================================================================================== 00:23:44.089 [2024-12-16T01:45:14.748Z] Total : 5507.84 21.51 0.00 0.00 23082.50 4899.60 23842.62 00:23:44.089 { 00:23:44.089 "results": [ 00:23:44.089 { 00:23:44.089 "job": "nvme0n1", 00:23:44.089 "core_mask": "0x2", 00:23:44.089 "workload": "verify", 00:23:44.089 "status": "finished", 00:23:44.089 "verify_range": { 00:23:44.089 "start": 0, 00:23:44.089 "length": 8192 00:23:44.089 }, 00:23:44.089 "queue_depth": 128, 00:23:44.089 "io_size": 4096, 00:23:44.089 "runtime": 1.013465, 00:23:44.089 "iops": 5507.836975129876, 00:23:44.089 "mibps": 21.514988184101078, 00:23:44.089 "io_failed": 0, 00:23:44.089 "io_timeout": 0, 00:23:44.089 "avg_latency_us": 23082.49571172647, 00:23:44.089 "min_latency_us": 4899.596190476191, 00:23:44.089 "max_latency_us": 23842.620952380952 00:23:44.089 } 00:23:44.089 ], 00:23:44.089 "core_count": 1 00:23:44.089 } 00:23:44.089 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1025671 00:23:44.089 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1025671 ']' 00:23:44.089 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 1025671 00:23:44.089 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:44.089 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:44.089 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1025671 00:23:44.089 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:44.089 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:44.089 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1025671' 00:23:44.089 killing process with pid 1025671 00:23:44.089 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1025671 00:23:44.089 Received shutdown signal, test time was about 1.000000 seconds 00:23:44.089 00:23:44.089 Latency(us) 00:23:44.089 [2024-12-16T01:45:14.748Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:44.089 [2024-12-16T01:45:14.748Z] =================================================================================================================== 00:23:44.089 [2024-12-16T01:45:14.748Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:44.089 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1025671 00:23:44.347 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1025423 00:23:44.347 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1025423 ']' 00:23:44.347 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1025423 00:23:44.347 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:44.347 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:44.347 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1025423 00:23:44.347 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:44.347 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:44.347 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1025423' 00:23:44.347 killing process with pid 1025423 00:23:44.347 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1025423 00:23:44.347 02:45:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1025423 00:23:44.605 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:44.605 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:44.605 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:44.605 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.605 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1025917 00:23:44.605 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:44.606 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1025917 00:23:44.606 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1025917 ']' 00:23:44.606 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.606 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:23:44.606 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.606 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:44.606 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.606 [2024-12-16 02:45:15.144062] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:44.606 [2024-12-16 02:45:15.144107] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.606 [2024-12-16 02:45:15.220948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.606 [2024-12-16 02:45:15.242098] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.606 [2024-12-16 02:45:15.242135] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.606 [2024-12-16 02:45:15.242143] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.606 [2024-12-16 02:45:15.242149] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.606 [2024-12-16 02:45:15.242154] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:44.606 [2024-12-16 02:45:15.242636] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.864 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:44.864 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:44.864 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:44.864 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:44.864 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.864 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:44.864 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:44.864 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.864 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.864 [2024-12-16 02:45:15.373000] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:44.864 malloc0 00:23:44.864 [2024-12-16 02:45:15.400947] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:44.864 [2024-12-16 02:45:15.401144] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:44.864 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.864 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1026112 00:23:44.864 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:44.864 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 1026112 /var/tmp/bdevperf.sock 00:23:44.864 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1026112 ']' 00:23:44.864 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:44.864 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:44.864 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:44.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:44.864 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:44.864 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.864 [2024-12-16 02:45:15.468224] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:44.864 [2024-12-16 02:45:15.468264] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1026112 ] 00:23:45.122 [2024-12-16 02:45:15.543668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.122 [2024-12-16 02:45:15.566025] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:45.122 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:45.122 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:45.122 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.vFEfGfbcOR 00:23:45.380 02:45:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:45.380 [2024-12-16 02:45:16.010058] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:45.639 nvme0n1 00:23:45.639 02:45:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:45.639 Running I/O for 1 seconds... 
00:23:46.576 5304.00 IOPS, 20.72 MiB/s 00:23:46.576 Latency(us) 00:23:46.576 [2024-12-16T01:45:17.235Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.576 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:46.576 Verification LBA range: start 0x0 length 0x2000 00:23:46.576 nvme0n1 : 1.02 5345.83 20.88 0.00 0.00 23760.50 5804.62 32206.26 00:23:46.576 [2024-12-16T01:45:17.235Z] =================================================================================================================== 00:23:46.576 [2024-12-16T01:45:17.235Z] Total : 5345.83 20.88 0.00 0.00 23760.50 5804.62 32206.26 00:23:46.576 { 00:23:46.576 "results": [ 00:23:46.576 { 00:23:46.576 "job": "nvme0n1", 00:23:46.576 "core_mask": "0x2", 00:23:46.576 "workload": "verify", 00:23:46.576 "status": "finished", 00:23:46.576 "verify_range": { 00:23:46.576 "start": 0, 00:23:46.576 "length": 8192 00:23:46.576 }, 00:23:46.576 "queue_depth": 128, 00:23:46.576 "io_size": 4096, 00:23:46.576 "runtime": 1.01612, 00:23:46.576 "iops": 5345.825296224855, 00:23:46.576 "mibps": 20.88213006337834, 00:23:46.576 "io_failed": 0, 00:23:46.576 "io_timeout": 0, 00:23:46.576 "avg_latency_us": 23760.499872361317, 00:23:46.576 "min_latency_us": 5804.617142857142, 00:23:46.576 "max_latency_us": 32206.262857142858 00:23:46.576 } 00:23:46.576 ], 00:23:46.576 "core_count": 1 00:23:46.576 } 00:23:46.835 02:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:46.835 02:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.835 02:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:46.835 02:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.835 02:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:23:46.835 "subsystems": [ 00:23:46.835 { 00:23:46.835 "subsystem": 
"keyring", 00:23:46.835 "config": [ 00:23:46.835 { 00:23:46.835 "method": "keyring_file_add_key", 00:23:46.835 "params": { 00:23:46.835 "name": "key0", 00:23:46.835 "path": "/tmp/tmp.vFEfGfbcOR" 00:23:46.835 } 00:23:46.835 } 00:23:46.835 ] 00:23:46.835 }, 00:23:46.835 { 00:23:46.835 "subsystem": "iobuf", 00:23:46.835 "config": [ 00:23:46.835 { 00:23:46.835 "method": "iobuf_set_options", 00:23:46.835 "params": { 00:23:46.835 "small_pool_count": 8192, 00:23:46.835 "large_pool_count": 1024, 00:23:46.835 "small_bufsize": 8192, 00:23:46.835 "large_bufsize": 135168, 00:23:46.835 "enable_numa": false 00:23:46.835 } 00:23:46.835 } 00:23:46.835 ] 00:23:46.835 }, 00:23:46.835 { 00:23:46.835 "subsystem": "sock", 00:23:46.835 "config": [ 00:23:46.835 { 00:23:46.835 "method": "sock_set_default_impl", 00:23:46.835 "params": { 00:23:46.835 "impl_name": "posix" 00:23:46.835 } 00:23:46.835 }, 00:23:46.835 { 00:23:46.835 "method": "sock_impl_set_options", 00:23:46.835 "params": { 00:23:46.835 "impl_name": "ssl", 00:23:46.835 "recv_buf_size": 4096, 00:23:46.835 "send_buf_size": 4096, 00:23:46.835 "enable_recv_pipe": true, 00:23:46.835 "enable_quickack": false, 00:23:46.835 "enable_placement_id": 0, 00:23:46.835 "enable_zerocopy_send_server": true, 00:23:46.835 "enable_zerocopy_send_client": false, 00:23:46.835 "zerocopy_threshold": 0, 00:23:46.835 "tls_version": 0, 00:23:46.835 "enable_ktls": false 00:23:46.835 } 00:23:46.835 }, 00:23:46.835 { 00:23:46.835 "method": "sock_impl_set_options", 00:23:46.835 "params": { 00:23:46.835 "impl_name": "posix", 00:23:46.835 "recv_buf_size": 2097152, 00:23:46.835 "send_buf_size": 2097152, 00:23:46.835 "enable_recv_pipe": true, 00:23:46.835 "enable_quickack": false, 00:23:46.835 "enable_placement_id": 0, 00:23:46.835 "enable_zerocopy_send_server": true, 00:23:46.835 "enable_zerocopy_send_client": false, 00:23:46.835 "zerocopy_threshold": 0, 00:23:46.835 "tls_version": 0, 00:23:46.835 "enable_ktls": false 00:23:46.835 } 00:23:46.835 } 00:23:46.835 
] 00:23:46.835 }, 00:23:46.835 { 00:23:46.835 "subsystem": "vmd", 00:23:46.835 "config": [] 00:23:46.835 }, 00:23:46.835 { 00:23:46.835 "subsystem": "accel", 00:23:46.835 "config": [ 00:23:46.835 { 00:23:46.835 "method": "accel_set_options", 00:23:46.835 "params": { 00:23:46.835 "small_cache_size": 128, 00:23:46.835 "large_cache_size": 16, 00:23:46.835 "task_count": 2048, 00:23:46.835 "sequence_count": 2048, 00:23:46.835 "buf_count": 2048 00:23:46.835 } 00:23:46.835 } 00:23:46.835 ] 00:23:46.835 }, 00:23:46.835 { 00:23:46.835 "subsystem": "bdev", 00:23:46.835 "config": [ 00:23:46.835 { 00:23:46.835 "method": "bdev_set_options", 00:23:46.835 "params": { 00:23:46.835 "bdev_io_pool_size": 65535, 00:23:46.835 "bdev_io_cache_size": 256, 00:23:46.835 "bdev_auto_examine": true, 00:23:46.835 "iobuf_small_cache_size": 128, 00:23:46.835 "iobuf_large_cache_size": 16 00:23:46.835 } 00:23:46.835 }, 00:23:46.835 { 00:23:46.835 "method": "bdev_raid_set_options", 00:23:46.835 "params": { 00:23:46.835 "process_window_size_kb": 1024, 00:23:46.835 "process_max_bandwidth_mb_sec": 0 00:23:46.835 } 00:23:46.835 }, 00:23:46.835 { 00:23:46.835 "method": "bdev_iscsi_set_options", 00:23:46.835 "params": { 00:23:46.835 "timeout_sec": 30 00:23:46.835 } 00:23:46.835 }, 00:23:46.835 { 00:23:46.835 "method": "bdev_nvme_set_options", 00:23:46.835 "params": { 00:23:46.835 "action_on_timeout": "none", 00:23:46.835 "timeout_us": 0, 00:23:46.835 "timeout_admin_us": 0, 00:23:46.835 "keep_alive_timeout_ms": 10000, 00:23:46.835 "arbitration_burst": 0, 00:23:46.836 "low_priority_weight": 0, 00:23:46.836 "medium_priority_weight": 0, 00:23:46.836 "high_priority_weight": 0, 00:23:46.836 "nvme_adminq_poll_period_us": 10000, 00:23:46.836 "nvme_ioq_poll_period_us": 0, 00:23:46.836 "io_queue_requests": 0, 00:23:46.836 "delay_cmd_submit": true, 00:23:46.836 "transport_retry_count": 4, 00:23:46.836 "bdev_retry_count": 3, 00:23:46.836 "transport_ack_timeout": 0, 00:23:46.836 "ctrlr_loss_timeout_sec": 0, 
00:23:46.836 "reconnect_delay_sec": 0, 00:23:46.836 "fast_io_fail_timeout_sec": 0, 00:23:46.836 "disable_auto_failback": false, 00:23:46.836 "generate_uuids": false, 00:23:46.836 "transport_tos": 0, 00:23:46.836 "nvme_error_stat": false, 00:23:46.836 "rdma_srq_size": 0, 00:23:46.836 "io_path_stat": false, 00:23:46.836 "allow_accel_sequence": false, 00:23:46.836 "rdma_max_cq_size": 0, 00:23:46.836 "rdma_cm_event_timeout_ms": 0, 00:23:46.836 "dhchap_digests": [ 00:23:46.836 "sha256", 00:23:46.836 "sha384", 00:23:46.836 "sha512" 00:23:46.836 ], 00:23:46.836 "dhchap_dhgroups": [ 00:23:46.836 "null", 00:23:46.836 "ffdhe2048", 00:23:46.836 "ffdhe3072", 00:23:46.836 "ffdhe4096", 00:23:46.836 "ffdhe6144", 00:23:46.836 "ffdhe8192" 00:23:46.836 ], 00:23:46.836 "rdma_umr_per_io": false 00:23:46.836 } 00:23:46.836 }, 00:23:46.836 { 00:23:46.836 "method": "bdev_nvme_set_hotplug", 00:23:46.836 "params": { 00:23:46.836 "period_us": 100000, 00:23:46.836 "enable": false 00:23:46.836 } 00:23:46.836 }, 00:23:46.836 { 00:23:46.836 "method": "bdev_malloc_create", 00:23:46.836 "params": { 00:23:46.836 "name": "malloc0", 00:23:46.836 "num_blocks": 8192, 00:23:46.836 "block_size": 4096, 00:23:46.836 "physical_block_size": 4096, 00:23:46.836 "uuid": "6af6b894-cbd9-45d1-a5bf-aff9545a0d63", 00:23:46.836 "optimal_io_boundary": 0, 00:23:46.836 "md_size": 0, 00:23:46.836 "dif_type": 0, 00:23:46.836 "dif_is_head_of_md": false, 00:23:46.836 "dif_pi_format": 0 00:23:46.836 } 00:23:46.836 }, 00:23:46.836 { 00:23:46.836 "method": "bdev_wait_for_examine" 00:23:46.836 } 00:23:46.836 ] 00:23:46.836 }, 00:23:46.836 { 00:23:46.836 "subsystem": "nbd", 00:23:46.836 "config": [] 00:23:46.836 }, 00:23:46.836 { 00:23:46.836 "subsystem": "scheduler", 00:23:46.836 "config": [ 00:23:46.836 { 00:23:46.836 "method": "framework_set_scheduler", 00:23:46.836 "params": { 00:23:46.836 "name": "static" 00:23:46.836 } 00:23:46.836 } 00:23:46.836 ] 00:23:46.836 }, 00:23:46.836 { 00:23:46.836 "subsystem": "nvmf", 
00:23:46.836 "config": [ 00:23:46.836 { 00:23:46.836 "method": "nvmf_set_config", 00:23:46.836 "params": { 00:23:46.836 "discovery_filter": "match_any", 00:23:46.836 "admin_cmd_passthru": { 00:23:46.836 "identify_ctrlr": false 00:23:46.836 }, 00:23:46.836 "dhchap_digests": [ 00:23:46.836 "sha256", 00:23:46.836 "sha384", 00:23:46.836 "sha512" 00:23:46.836 ], 00:23:46.836 "dhchap_dhgroups": [ 00:23:46.836 "null", 00:23:46.836 "ffdhe2048", 00:23:46.836 "ffdhe3072", 00:23:46.836 "ffdhe4096", 00:23:46.836 "ffdhe6144", 00:23:46.836 "ffdhe8192" 00:23:46.836 ] 00:23:46.836 } 00:23:46.836 }, 00:23:46.836 { 00:23:46.836 "method": "nvmf_set_max_subsystems", 00:23:46.836 "params": { 00:23:46.836 "max_subsystems": 1024 00:23:46.836 } 00:23:46.836 }, 00:23:46.836 { 00:23:46.836 "method": "nvmf_set_crdt", 00:23:46.836 "params": { 00:23:46.836 "crdt1": 0, 00:23:46.836 "crdt2": 0, 00:23:46.836 "crdt3": 0 00:23:46.836 } 00:23:46.836 }, 00:23:46.836 { 00:23:46.836 "method": "nvmf_create_transport", 00:23:46.836 "params": { 00:23:46.836 "trtype": "TCP", 00:23:46.836 "max_queue_depth": 128, 00:23:46.836 "max_io_qpairs_per_ctrlr": 127, 00:23:46.836 "in_capsule_data_size": 4096, 00:23:46.836 "max_io_size": 131072, 00:23:46.836 "io_unit_size": 131072, 00:23:46.836 "max_aq_depth": 128, 00:23:46.836 "num_shared_buffers": 511, 00:23:46.836 "buf_cache_size": 4294967295, 00:23:46.836 "dif_insert_or_strip": false, 00:23:46.836 "zcopy": false, 00:23:46.836 "c2h_success": false, 00:23:46.836 "sock_priority": 0, 00:23:46.836 "abort_timeout_sec": 1, 00:23:46.836 "ack_timeout": 0, 00:23:46.836 "data_wr_pool_size": 0 00:23:46.836 } 00:23:46.836 }, 00:23:46.836 { 00:23:46.836 "method": "nvmf_create_subsystem", 00:23:46.836 "params": { 00:23:46.836 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.836 "allow_any_host": false, 00:23:46.836 "serial_number": "00000000000000000000", 00:23:46.836 "model_number": "SPDK bdev Controller", 00:23:46.836 "max_namespaces": 32, 00:23:46.836 "min_cntlid": 1, 
00:23:46.836 "max_cntlid": 65519, 00:23:46.836 "ana_reporting": false 00:23:46.836 } 00:23:46.836 }, 00:23:46.836 { 00:23:46.836 "method": "nvmf_subsystem_add_host", 00:23:46.836 "params": { 00:23:46.836 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.836 "host": "nqn.2016-06.io.spdk:host1", 00:23:46.836 "psk": "key0" 00:23:46.836 } 00:23:46.836 }, 00:23:46.836 { 00:23:46.836 "method": "nvmf_subsystem_add_ns", 00:23:46.836 "params": { 00:23:46.836 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.836 "namespace": { 00:23:46.836 "nsid": 1, 00:23:46.836 "bdev_name": "malloc0", 00:23:46.836 "nguid": "6AF6B894CBD945D1A5BFAFF9545A0D63", 00:23:46.836 "uuid": "6af6b894-cbd9-45d1-a5bf-aff9545a0d63", 00:23:46.836 "no_auto_visible": false 00:23:46.836 } 00:23:46.836 } 00:23:46.836 }, 00:23:46.836 { 00:23:46.836 "method": "nvmf_subsystem_add_listener", 00:23:46.836 "params": { 00:23:46.836 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.836 "listen_address": { 00:23:46.836 "trtype": "TCP", 00:23:46.836 "adrfam": "IPv4", 00:23:46.836 "traddr": "10.0.0.2", 00:23:46.836 "trsvcid": "4420" 00:23:46.836 }, 00:23:46.836 "secure_channel": false, 00:23:46.836 "sock_impl": "ssl" 00:23:46.836 } 00:23:46.836 } 00:23:46.836 ] 00:23:46.836 } 00:23:46.836 ] 00:23:46.836 }' 00:23:46.836 02:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:47.095 02:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:47.095 "subsystems": [ 00:23:47.095 { 00:23:47.095 "subsystem": "keyring", 00:23:47.095 "config": [ 00:23:47.095 { 00:23:47.096 "method": "keyring_file_add_key", 00:23:47.096 "params": { 00:23:47.096 "name": "key0", 00:23:47.096 "path": "/tmp/tmp.vFEfGfbcOR" 00:23:47.096 } 00:23:47.096 } 00:23:47.096 ] 00:23:47.096 }, 00:23:47.096 { 00:23:47.096 "subsystem": "iobuf", 00:23:47.096 "config": [ 00:23:47.096 { 00:23:47.096 "method": 
"iobuf_set_options", 00:23:47.096 "params": { 00:23:47.096 "small_pool_count": 8192, 00:23:47.096 "large_pool_count": 1024, 00:23:47.096 "small_bufsize": 8192, 00:23:47.096 "large_bufsize": 135168, 00:23:47.096 "enable_numa": false 00:23:47.096 } 00:23:47.096 } 00:23:47.096 ] 00:23:47.096 }, 00:23:47.096 { 00:23:47.096 "subsystem": "sock", 00:23:47.096 "config": [ 00:23:47.096 { 00:23:47.096 "method": "sock_set_default_impl", 00:23:47.096 "params": { 00:23:47.096 "impl_name": "posix" 00:23:47.096 } 00:23:47.096 }, 00:23:47.096 { 00:23:47.096 "method": "sock_impl_set_options", 00:23:47.096 "params": { 00:23:47.096 "impl_name": "ssl", 00:23:47.096 "recv_buf_size": 4096, 00:23:47.096 "send_buf_size": 4096, 00:23:47.096 "enable_recv_pipe": true, 00:23:47.096 "enable_quickack": false, 00:23:47.096 "enable_placement_id": 0, 00:23:47.096 "enable_zerocopy_send_server": true, 00:23:47.096 "enable_zerocopy_send_client": false, 00:23:47.096 "zerocopy_threshold": 0, 00:23:47.096 "tls_version": 0, 00:23:47.096 "enable_ktls": false 00:23:47.096 } 00:23:47.096 }, 00:23:47.096 { 00:23:47.096 "method": "sock_impl_set_options", 00:23:47.096 "params": { 00:23:47.096 "impl_name": "posix", 00:23:47.096 "recv_buf_size": 2097152, 00:23:47.096 "send_buf_size": 2097152, 00:23:47.096 "enable_recv_pipe": true, 00:23:47.096 "enable_quickack": false, 00:23:47.096 "enable_placement_id": 0, 00:23:47.096 "enable_zerocopy_send_server": true, 00:23:47.096 "enable_zerocopy_send_client": false, 00:23:47.096 "zerocopy_threshold": 0, 00:23:47.096 "tls_version": 0, 00:23:47.096 "enable_ktls": false 00:23:47.096 } 00:23:47.096 } 00:23:47.096 ] 00:23:47.096 }, 00:23:47.096 { 00:23:47.096 "subsystem": "vmd", 00:23:47.096 "config": [] 00:23:47.096 }, 00:23:47.096 { 00:23:47.096 "subsystem": "accel", 00:23:47.096 "config": [ 00:23:47.096 { 00:23:47.096 "method": "accel_set_options", 00:23:47.096 "params": { 00:23:47.096 "small_cache_size": 128, 00:23:47.096 "large_cache_size": 16, 00:23:47.096 "task_count": 
2048, 00:23:47.096 "sequence_count": 2048, 00:23:47.096 "buf_count": 2048 00:23:47.096 } 00:23:47.096 } 00:23:47.096 ] 00:23:47.096 }, 00:23:47.096 { 00:23:47.096 "subsystem": "bdev", 00:23:47.096 "config": [ 00:23:47.096 { 00:23:47.096 "method": "bdev_set_options", 00:23:47.096 "params": { 00:23:47.096 "bdev_io_pool_size": 65535, 00:23:47.096 "bdev_io_cache_size": 256, 00:23:47.096 "bdev_auto_examine": true, 00:23:47.096 "iobuf_small_cache_size": 128, 00:23:47.096 "iobuf_large_cache_size": 16 00:23:47.096 } 00:23:47.096 }, 00:23:47.096 { 00:23:47.096 "method": "bdev_raid_set_options", 00:23:47.096 "params": { 00:23:47.096 "process_window_size_kb": 1024, 00:23:47.096 "process_max_bandwidth_mb_sec": 0 00:23:47.096 } 00:23:47.096 }, 00:23:47.096 { 00:23:47.096 "method": "bdev_iscsi_set_options", 00:23:47.096 "params": { 00:23:47.096 "timeout_sec": 30 00:23:47.096 } 00:23:47.096 }, 00:23:47.096 { 00:23:47.096 "method": "bdev_nvme_set_options", 00:23:47.096 "params": { 00:23:47.096 "action_on_timeout": "none", 00:23:47.096 "timeout_us": 0, 00:23:47.096 "timeout_admin_us": 0, 00:23:47.096 "keep_alive_timeout_ms": 10000, 00:23:47.096 "arbitration_burst": 0, 00:23:47.096 "low_priority_weight": 0, 00:23:47.096 "medium_priority_weight": 0, 00:23:47.096 "high_priority_weight": 0, 00:23:47.096 "nvme_adminq_poll_period_us": 10000, 00:23:47.096 "nvme_ioq_poll_period_us": 0, 00:23:47.096 "io_queue_requests": 512, 00:23:47.096 "delay_cmd_submit": true, 00:23:47.096 "transport_retry_count": 4, 00:23:47.096 "bdev_retry_count": 3, 00:23:47.096 "transport_ack_timeout": 0, 00:23:47.096 "ctrlr_loss_timeout_sec": 0, 00:23:47.096 "reconnect_delay_sec": 0, 00:23:47.096 "fast_io_fail_timeout_sec": 0, 00:23:47.096 "disable_auto_failback": false, 00:23:47.096 "generate_uuids": false, 00:23:47.096 "transport_tos": 0, 00:23:47.096 "nvme_error_stat": false, 00:23:47.096 "rdma_srq_size": 0, 00:23:47.096 "io_path_stat": false, 00:23:47.096 "allow_accel_sequence": false, 00:23:47.096 
"rdma_max_cq_size": 0, 00:23:47.096 "rdma_cm_event_timeout_ms": 0, 00:23:47.096 "dhchap_digests": [ 00:23:47.096 "sha256", 00:23:47.096 "sha384", 00:23:47.096 "sha512" 00:23:47.096 ], 00:23:47.096 "dhchap_dhgroups": [ 00:23:47.096 "null", 00:23:47.096 "ffdhe2048", 00:23:47.096 "ffdhe3072", 00:23:47.096 "ffdhe4096", 00:23:47.096 "ffdhe6144", 00:23:47.096 "ffdhe8192" 00:23:47.096 ], 00:23:47.096 "rdma_umr_per_io": false 00:23:47.096 } 00:23:47.096 }, 00:23:47.096 { 00:23:47.096 "method": "bdev_nvme_attach_controller", 00:23:47.096 "params": { 00:23:47.096 "name": "nvme0", 00:23:47.096 "trtype": "TCP", 00:23:47.096 "adrfam": "IPv4", 00:23:47.096 "traddr": "10.0.0.2", 00:23:47.096 "trsvcid": "4420", 00:23:47.096 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.096 "prchk_reftag": false, 00:23:47.096 "prchk_guard": false, 00:23:47.096 "ctrlr_loss_timeout_sec": 0, 00:23:47.096 "reconnect_delay_sec": 0, 00:23:47.096 "fast_io_fail_timeout_sec": 0, 00:23:47.096 "psk": "key0", 00:23:47.096 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:47.096 "hdgst": false, 00:23:47.096 "ddgst": false, 00:23:47.096 "multipath": "multipath" 00:23:47.096 } 00:23:47.096 }, 00:23:47.096 { 00:23:47.096 "method": "bdev_nvme_set_hotplug", 00:23:47.096 "params": { 00:23:47.096 "period_us": 100000, 00:23:47.096 "enable": false 00:23:47.096 } 00:23:47.096 }, 00:23:47.096 { 00:23:47.096 "method": "bdev_enable_histogram", 00:23:47.096 "params": { 00:23:47.096 "name": "nvme0n1", 00:23:47.096 "enable": true 00:23:47.096 } 00:23:47.096 }, 00:23:47.096 { 00:23:47.096 "method": "bdev_wait_for_examine" 00:23:47.096 } 00:23:47.096 ] 00:23:47.096 }, 00:23:47.096 { 00:23:47.096 "subsystem": "nbd", 00:23:47.096 "config": [] 00:23:47.096 } 00:23:47.096 ] 00:23:47.096 }' 00:23:47.096 02:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1026112 00:23:47.096 02:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1026112 ']' 00:23:47.096 02:45:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1026112 00:23:47.096 02:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:47.096 02:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:47.096 02:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1026112 00:23:47.096 02:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:47.096 02:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:47.096 02:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1026112' 00:23:47.096 killing process with pid 1026112 00:23:47.096 02:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1026112 00:23:47.096 Received shutdown signal, test time was about 1.000000 seconds 00:23:47.096 00:23:47.096 Latency(us) 00:23:47.096 [2024-12-16T01:45:17.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:47.096 [2024-12-16T01:45:17.755Z] =================================================================================================================== 00:23:47.096 [2024-12-16T01:45:17.755Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:47.096 02:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1026112 00:23:47.356 02:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1025917 00:23:47.356 02:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1025917 ']' 00:23:47.356 02:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1025917 00:23:47.356 02:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:47.356 02:45:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:47.356 02:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1025917 00:23:47.356 02:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:47.356 02:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:47.356 02:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1025917' 00:23:47.356 killing process with pid 1025917 00:23:47.356 02:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1025917 00:23:47.356 02:45:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1025917 00:23:47.615 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:23:47.615 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:47.616 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:47.616 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:23:47.616 "subsystems": [ 00:23:47.616 { 00:23:47.616 "subsystem": "keyring", 00:23:47.616 "config": [ 00:23:47.616 { 00:23:47.616 "method": "keyring_file_add_key", 00:23:47.616 "params": { 00:23:47.616 "name": "key0", 00:23:47.616 "path": "/tmp/tmp.vFEfGfbcOR" 00:23:47.616 } 00:23:47.616 } 00:23:47.616 ] 00:23:47.616 }, 00:23:47.616 { 00:23:47.616 "subsystem": "iobuf", 00:23:47.616 "config": [ 00:23:47.616 { 00:23:47.616 "method": "iobuf_set_options", 00:23:47.616 "params": { 00:23:47.616 "small_pool_count": 8192, 00:23:47.616 "large_pool_count": 1024, 00:23:47.616 "small_bufsize": 8192, 00:23:47.616 "large_bufsize": 135168, 00:23:47.616 "enable_numa": false 00:23:47.616 } 00:23:47.616 } 
00:23:47.616 ] 00:23:47.616 }, 00:23:47.616 { 00:23:47.616 "subsystem": "sock", 00:23:47.616 "config": [ 00:23:47.616 { 00:23:47.616 "method": "sock_set_default_impl", 00:23:47.616 "params": { 00:23:47.616 "impl_name": "posix" 00:23:47.616 } 00:23:47.616 }, 00:23:47.616 { 00:23:47.616 "method": "sock_impl_set_options", 00:23:47.616 "params": { 00:23:47.616 "impl_name": "ssl", 00:23:47.616 "recv_buf_size": 4096, 00:23:47.616 "send_buf_size": 4096, 00:23:47.616 "enable_recv_pipe": true, 00:23:47.616 "enable_quickack": false, 00:23:47.616 "enable_placement_id": 0, 00:23:47.616 "enable_zerocopy_send_server": true, 00:23:47.616 "enable_zerocopy_send_client": false, 00:23:47.616 "zerocopy_threshold": 0, 00:23:47.616 "tls_version": 0, 00:23:47.616 "enable_ktls": false 00:23:47.616 } 00:23:47.616 }, 00:23:47.616 { 00:23:47.616 "method": "sock_impl_set_options", 00:23:47.616 "params": { 00:23:47.616 "impl_name": "posix", 00:23:47.616 "recv_buf_size": 2097152, 00:23:47.616 "send_buf_size": 2097152, 00:23:47.616 "enable_recv_pipe": true, 00:23:47.616 "enable_quickack": false, 00:23:47.616 "enable_placement_id": 0, 00:23:47.616 "enable_zerocopy_send_server": true, 00:23:47.616 "enable_zerocopy_send_client": false, 00:23:47.616 "zerocopy_threshold": 0, 00:23:47.616 "tls_version": 0, 00:23:47.616 "enable_ktls": false 00:23:47.616 } 00:23:47.616 } 00:23:47.616 ] 00:23:47.616 }, 00:23:47.616 { 00:23:47.616 "subsystem": "vmd", 00:23:47.616 "config": [] 00:23:47.616 }, 00:23:47.616 { 00:23:47.616 "subsystem": "accel", 00:23:47.616 "config": [ 00:23:47.616 { 00:23:47.616 "method": "accel_set_options", 00:23:47.616 "params": { 00:23:47.616 "small_cache_size": 128, 00:23:47.616 "large_cache_size": 16, 00:23:47.616 "task_count": 2048, 00:23:47.616 "sequence_count": 2048, 00:23:47.616 "buf_count": 2048 00:23:47.616 } 00:23:47.616 } 00:23:47.616 ] 00:23:47.616 }, 00:23:47.616 { 00:23:47.616 "subsystem": "bdev", 00:23:47.616 "config": [ 00:23:47.616 { 00:23:47.616 "method": 
"bdev_set_options", 00:23:47.616 "params": { 00:23:47.616 "bdev_io_pool_size": 65535, 00:23:47.616 "bdev_io_cache_size": 256, 00:23:47.616 "bdev_auto_examine": true, 00:23:47.616 "iobuf_small_cache_size": 128, 00:23:47.616 "iobuf_large_cache_size": 16 00:23:47.616 } 00:23:47.616 }, 00:23:47.616 { 00:23:47.616 "method": "bdev_raid_set_options", 00:23:47.616 "params": { 00:23:47.616 "process_window_size_kb": 1024, 00:23:47.616 "process_max_bandwidth_mb_sec": 0 00:23:47.616 } 00:23:47.616 }, 00:23:47.616 { 00:23:47.616 "method": "bdev_iscsi_set_options", 00:23:47.616 "params": { 00:23:47.616 "timeout_sec": 30 00:23:47.616 } 00:23:47.616 }, 00:23:47.616 { 00:23:47.616 "method": "bdev_nvme_set_options", 00:23:47.616 "params": { 00:23:47.616 "action_on_timeout": "none", 00:23:47.616 "timeout_us": 0, 00:23:47.616 "timeout_admin_us": 0, 00:23:47.616 "keep_alive_timeout_ms": 10000, 00:23:47.616 "arbitration_burst": 0, 00:23:47.616 "low_priority_weight": 0, 00:23:47.616 "medium_priority_weight": 0, 00:23:47.616 "high_priority_weight": 0, 00:23:47.616 "nvme_adminq_poll_period_us": 10000, 00:23:47.616 "nvme_ioq_poll_period_us": 0, 00:23:47.616 "io_queue_requests": 0, 00:23:47.616 "delay_cmd_submit": true, 00:23:47.616 "transport_retry_count": 4, 00:23:47.616 "bdev_retry_count": 3, 00:23:47.616 "transport_ack_timeout": 0, 00:23:47.616 "ctrlr_loss_timeout_sec": 0, 00:23:47.616 "reconnect_delay_sec": 0, 00:23:47.616 "fast_io_fail_timeout_sec": 0, 00:23:47.616 "disable_auto_failback": false, 00:23:47.616 "generate_uuids": false, 00:23:47.616 "transport_tos": 0, 00:23:47.616 "nvme_error_stat": false, 00:23:47.616 "rdma_srq_size": 0, 00:23:47.616 "io_path_stat": false, 00:23:47.616 "allow_accel_sequence": false, 00:23:47.616 "rdma_max_cq_size": 0, 00:23:47.616 "rdma_cm_event_timeout_ms": 0, 00:23:47.616 "dhchap_digests": [ 00:23:47.616 "sha256", 00:23:47.616 "sha384", 00:23:47.616 "sha512" 00:23:47.616 ], 00:23:47.616 "dhchap_dhgroups": [ 00:23:47.616 "null", 00:23:47.616 
"ffdhe2048", 00:23:47.616 "ffdhe3072", 00:23:47.616 "ffdhe4096", 00:23:47.616 "ffdhe6144", 00:23:47.616 "ffdhe8192" 00:23:47.616 ], 00:23:47.616 "rdma_umr_per_io": false 00:23:47.616 } 00:23:47.616 }, 00:23:47.616 { 00:23:47.616 "method": "bdev_nvme_set_hotplug", 00:23:47.616 "params": { 00:23:47.616 "period_us": 100000, 00:23:47.616 "enable": false 00:23:47.616 } 00:23:47.616 }, 00:23:47.616 { 00:23:47.616 "method": "bdev_malloc_create", 00:23:47.616 "params": { 00:23:47.616 "name": "malloc0", 00:23:47.616 "num_blocks": 8192, 00:23:47.616 "block_size": 4096, 00:23:47.616 "physical_block_size": 4096, 00:23:47.616 "uuid": "6af6b894-cbd9-45d1-a5bf-aff9545a0d63", 00:23:47.616 "optimal_io_boundary": 0, 00:23:47.616 "md_size": 0, 00:23:47.616 "dif_type": 0, 00:23:47.616 "dif_is_head_of_md": false, 00:23:47.616 "dif_pi_format": 0 00:23:47.616 } 00:23:47.616 }, 00:23:47.616 { 00:23:47.616 "method": "bdev_wait_for_examine" 00:23:47.616 } 00:23:47.616 ] 00:23:47.616 }, 00:23:47.616 { 00:23:47.616 "subsystem": "nbd", 00:23:47.616 "config": [] 00:23:47.616 }, 00:23:47.616 { 00:23:47.616 "subsystem": "scheduler", 00:23:47.616 "config": [ 00:23:47.616 { 00:23:47.616 "method": "framework_set_scheduler", 00:23:47.616 "params": { 00:23:47.616 "name": "static" 00:23:47.616 } 00:23:47.616 } 00:23:47.616 ] 00:23:47.616 }, 00:23:47.616 { 00:23:47.616 "subsystem": "nvmf", 00:23:47.616 "config": [ 00:23:47.616 { 00:23:47.616 "method": "nvmf_set_config", 00:23:47.616 "params": { 00:23:47.616 "discovery_filter": "match_any", 00:23:47.616 "admin_cmd_passthru": { 00:23:47.616 "identify_ctrlr": false 00:23:47.616 }, 00:23:47.616 "dhchap_digests": [ 00:23:47.616 "sha256", 00:23:47.616 "sha384", 00:23:47.616 "sha512" 00:23:47.616 ], 00:23:47.616 "dhchap_dhgroups": [ 00:23:47.616 "null", 00:23:47.616 "ffdhe2048", 00:23:47.616 "ffdhe3072", 00:23:47.616 "ffdhe4096", 00:23:47.616 "ffdhe6144", 00:23:47.616 "ffdhe8192" 00:23:47.616 ] 00:23:47.616 } 00:23:47.616 }, 00:23:47.616 { 00:23:47.616 
"method": "nvmf_set_max_subsystems", 00:23:47.616 "params": { 00:23:47.616 "max_subsystems": 1024 00:23:47.616 } 00:23:47.616 }, 00:23:47.616 { 00:23:47.616 "method": "nvmf_set_crdt", 00:23:47.616 "params": { 00:23:47.616 "crdt1": 0, 00:23:47.616 "crdt2": 0, 00:23:47.616 "crdt3": 0 00:23:47.616 } 00:23:47.616 }, 00:23:47.616 { 00:23:47.616 "method": "nvmf_create_transport", 00:23:47.616 "params": { 00:23:47.616 "trtype": "TCP", 00:23:47.616 "max_queue_depth": 128, 00:23:47.616 "max_io_qpairs_per_ctrlr": 127, 00:23:47.616 "in_capsule_data_size": 4096, 00:23:47.616 "max_io_size": 131072, 00:23:47.616 "io_unit_size": 131072, 00:23:47.616 "max_aq_depth": 128, 00:23:47.616 "num_shared_buffers": 511, 00:23:47.616 "buf_cache_size": 4294967295, 00:23:47.616 "dif_insert_or_strip": false, 00:23:47.616 "zcopy": false, 00:23:47.616 "c2h_success": false, 00:23:47.616 "sock_priority": 0, 00:23:47.616 "abort_timeout_sec": 1, 00:23:47.616 "ack_timeout": 0, 00:23:47.616 "data_wr_pool_size": 0 00:23:47.616 } 00:23:47.616 }, 00:23:47.616 { 00:23:47.616 "method": "nvmf_create_subsystem", 00:23:47.616 "params": { 00:23:47.616 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.616 "allow_any_host": false, 00:23:47.616 "serial_number": "00000000000000000000", 00:23:47.616 "model_number": "SPDK bdev Controller", 00:23:47.616 "max_namespaces": 32, 00:23:47.616 "min_cntlid": 1, 00:23:47.616 "max_cntlid": 65519, 00:23:47.616 "ana_reporting": false 00:23:47.616 } 00:23:47.616 }, 00:23:47.616 { 00:23:47.616 "method": "nvmf_subsystem_add_host", 00:23:47.616 "params": { 00:23:47.616 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.616 "host": "nqn.2016-06.io.spdk:host1", 00:23:47.616 "psk": "key0" 00:23:47.616 } 00:23:47.616 }, 00:23:47.616 { 00:23:47.616 "method": "nvmf_subsystem_add_ns", 00:23:47.616 "params": { 00:23:47.616 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.616 "namespace": { 00:23:47.616 "nsid": 1, 00:23:47.616 "bdev_name": "malloc0", 00:23:47.616 "nguid": 
"6AF6B894CBD945D1A5BFAFF9545A0D63", 00:23:47.616 "uuid": "6af6b894-cbd9-45d1-a5bf-aff9545a0d63", 00:23:47.616 "no_auto_visible": false 00:23:47.616 } 00:23:47.616 } 00:23:47.616 }, 00:23:47.616 { 00:23:47.616 "method": "nvmf_subsystem_add_listener", 00:23:47.617 "params": { 00:23:47.617 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.617 "listen_address": { 00:23:47.617 "trtype": "TCP", 00:23:47.617 "adrfam": "IPv4", 00:23:47.617 "traddr": "10.0.0.2", 00:23:47.617 "trsvcid": "4420" 00:23:47.617 }, 00:23:47.617 "secure_channel": false, 00:23:47.617 "sock_impl": "ssl" 00:23:47.617 } 00:23:47.617 } 00:23:47.617 ] 00:23:47.617 } 00:23:47.617 ] 00:23:47.617 }' 00:23:47.617 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.617 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1026463 00:23:47.617 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:47.617 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1026463 00:23:47.617 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1026463 ']' 00:23:47.617 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.617 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:47.617 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:47.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:47.617 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:47.617 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:47.617 [2024-12-16 02:45:18.084437] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:47.617 [2024-12-16 02:45:18.084483] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:47.617 [2024-12-16 02:45:18.162821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.617 [2024-12-16 02:45:18.182658] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:47.617 [2024-12-16 02:45:18.182695] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:47.617 [2024-12-16 02:45:18.182702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:47.617 [2024-12-16 02:45:18.182707] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:47.617 [2024-12-16 02:45:18.182712] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:47.617 [2024-12-16 02:45:18.183272] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.876 [2024-12-16 02:45:18.391925] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:47.876 [2024-12-16 02:45:18.423967] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:47.876 [2024-12-16 02:45:18.424185] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:48.444 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:48.444 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:48.444 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:48.444 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:48.444 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.444 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:48.444 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1026642 00:23:48.444 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1026642 /var/tmp/bdevperf.sock 00:23:48.444 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1026642 ']' 00:23:48.444 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:48.444 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:48.444 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:23:48.444 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:48.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:48.444 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:23:48.444 "subsystems": [ 00:23:48.444 { 00:23:48.444 "subsystem": "keyring", 00:23:48.444 "config": [ 00:23:48.444 { 00:23:48.444 "method": "keyring_file_add_key", 00:23:48.444 "params": { 00:23:48.444 "name": "key0", 00:23:48.444 "path": "/tmp/tmp.vFEfGfbcOR" 00:23:48.444 } 00:23:48.444 } 00:23:48.444 ] 00:23:48.444 }, 00:23:48.444 { 00:23:48.444 "subsystem": "iobuf", 00:23:48.444 "config": [ 00:23:48.444 { 00:23:48.444 "method": "iobuf_set_options", 00:23:48.444 "params": { 00:23:48.444 "small_pool_count": 8192, 00:23:48.444 "large_pool_count": 1024, 00:23:48.444 "small_bufsize": 8192, 00:23:48.444 "large_bufsize": 135168, 00:23:48.444 "enable_numa": false 00:23:48.444 } 00:23:48.444 } 00:23:48.444 ] 00:23:48.444 }, 00:23:48.444 { 00:23:48.444 "subsystem": "sock", 00:23:48.444 "config": [ 00:23:48.444 { 00:23:48.444 "method": "sock_set_default_impl", 00:23:48.444 "params": { 00:23:48.444 "impl_name": "posix" 00:23:48.444 } 00:23:48.444 }, 00:23:48.444 { 00:23:48.444 "method": "sock_impl_set_options", 00:23:48.444 "params": { 00:23:48.444 "impl_name": "ssl", 00:23:48.444 "recv_buf_size": 4096, 00:23:48.444 "send_buf_size": 4096, 00:23:48.444 "enable_recv_pipe": true, 00:23:48.444 "enable_quickack": false, 00:23:48.444 "enable_placement_id": 0, 00:23:48.444 "enable_zerocopy_send_server": true, 00:23:48.444 "enable_zerocopy_send_client": false, 00:23:48.444 "zerocopy_threshold": 0, 00:23:48.444 "tls_version": 0, 00:23:48.444 "enable_ktls": false 00:23:48.444 } 00:23:48.444 }, 00:23:48.444 { 00:23:48.444 "method": "sock_impl_set_options", 00:23:48.444 "params": { 
00:23:48.444 "impl_name": "posix", 00:23:48.444 "recv_buf_size": 2097152, 00:23:48.444 "send_buf_size": 2097152, 00:23:48.444 "enable_recv_pipe": true, 00:23:48.444 "enable_quickack": false, 00:23:48.444 "enable_placement_id": 0, 00:23:48.444 "enable_zerocopy_send_server": true, 00:23:48.444 "enable_zerocopy_send_client": false, 00:23:48.444 "zerocopy_threshold": 0, 00:23:48.444 "tls_version": 0, 00:23:48.444 "enable_ktls": false 00:23:48.444 } 00:23:48.444 } 00:23:48.444 ] 00:23:48.444 }, 00:23:48.444 { 00:23:48.444 "subsystem": "vmd", 00:23:48.444 "config": [] 00:23:48.444 }, 00:23:48.444 { 00:23:48.444 "subsystem": "accel", 00:23:48.444 "config": [ 00:23:48.444 { 00:23:48.444 "method": "accel_set_options", 00:23:48.444 "params": { 00:23:48.444 "small_cache_size": 128, 00:23:48.444 "large_cache_size": 16, 00:23:48.444 "task_count": 2048, 00:23:48.444 "sequence_count": 2048, 00:23:48.444 "buf_count": 2048 00:23:48.444 } 00:23:48.444 } 00:23:48.444 ] 00:23:48.444 }, 00:23:48.444 { 00:23:48.444 "subsystem": "bdev", 00:23:48.444 "config": [ 00:23:48.444 { 00:23:48.444 "method": "bdev_set_options", 00:23:48.444 "params": { 00:23:48.444 "bdev_io_pool_size": 65535, 00:23:48.444 "bdev_io_cache_size": 256, 00:23:48.444 "bdev_auto_examine": true, 00:23:48.444 "iobuf_small_cache_size": 128, 00:23:48.444 "iobuf_large_cache_size": 16 00:23:48.444 } 00:23:48.444 }, 00:23:48.444 { 00:23:48.444 "method": "bdev_raid_set_options", 00:23:48.444 "params": { 00:23:48.444 "process_window_size_kb": 1024, 00:23:48.444 "process_max_bandwidth_mb_sec": 0 00:23:48.444 } 00:23:48.444 }, 00:23:48.444 { 00:23:48.445 "method": "bdev_iscsi_set_options", 00:23:48.445 "params": { 00:23:48.445 "timeout_sec": 30 00:23:48.445 } 00:23:48.445 }, 00:23:48.445 { 00:23:48.445 "method": "bdev_nvme_set_options", 00:23:48.445 "params": { 00:23:48.445 "action_on_timeout": "none", 00:23:48.445 "timeout_us": 0, 00:23:48.445 "timeout_admin_us": 0, 00:23:48.445 "keep_alive_timeout_ms": 10000, 00:23:48.445 
"arbitration_burst": 0, 00:23:48.445 "low_priority_weight": 0, 00:23:48.445 "medium_priority_weight": 0, 00:23:48.445 "high_priority_weight": 0, 00:23:48.445 "nvme_adminq_poll_period_us": 10000, 00:23:48.445 "nvme_ioq_poll_period_us": 0, 00:23:48.445 "io_queue_requests": 512, 00:23:48.445 "delay_cmd_submit": true, 00:23:48.445 "transport_retry_count": 4, 00:23:48.445 "bdev_retry_count": 3, 00:23:48.445 "transport_ack_timeout": 0, 00:23:48.445 "ctrlr_loss_timeout_sec": 0, 00:23:48.445 "reconnect_delay_sec": 0, 00:23:48.445 "fast_io_fail_timeout_sec": 0, 00:23:48.445 "disable_auto_failback": false, 00:23:48.445 "generate_uuids": false, 00:23:48.445 "transport_tos": 0, 00:23:48.445 "nvme_error_stat": false, 00:23:48.445 "rdma_srq_size": 0, 00:23:48.445 "io_path_stat": false, 00:23:48.445 "allow_accel_sequence": false, 00:23:48.445 "rdma_max_cq_size": 0, 00:23:48.445 "rdma_cm_event_timeout_ms": 0, 00:23:48.445 "dhchap_digests": [ 00:23:48.445 "sha256", 00:23:48.445 "sha384", 00:23:48.445 "sha512" 00:23:48.445 ], 00:23:48.445 "dhchap_dhgroups": [ 00:23:48.445 "null", 00:23:48.445 "ffdhe2048", 00:23:48.445 "ffdhe3072", 00:23:48.445 "ffdhe4096", 00:23:48.445 "ffdhe6144", 00:23:48.445 "ffdhe8192" 00:23:48.445 ], 00:23:48.445 "rdma_umr_per_io": false 00:23:48.445 } 00:23:48.445 }, 00:23:48.445 { 00:23:48.445 "method": "bdev_nvme_attach_controller", 00:23:48.445 "params": { 00:23:48.445 "name": "nvme0", 00:23:48.445 "trtype": "TCP", 00:23:48.445 "adrfam": "IPv4", 00:23:48.445 "traddr": "10.0.0.2", 00:23:48.445 "trsvcid": "4420", 00:23:48.445 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:48.445 "prchk_reftag": false, 00:23:48.445 "prchk_guard": false, 00:23:48.445 "ctrlr_loss_timeout_sec": 0, 00:23:48.445 "reconnect_delay_sec": 0, 00:23:48.445 "fast_io_fail_timeout_sec": 0, 00:23:48.445 "psk": "key0", 00:23:48.445 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:48.445 "hdgst": false, 00:23:48.445 "ddgst": false, 00:23:48.445 "multipath": "multipath" 00:23:48.445 } 00:23:48.445 
}, 00:23:48.445 { 00:23:48.445 "method": "bdev_nvme_set_hotplug", 00:23:48.445 "params": { 00:23:48.445 "period_us": 100000, 00:23:48.445 "enable": false 00:23:48.445 } 00:23:48.445 }, 00:23:48.445 { 00:23:48.445 "method": "bdev_enable_histogram", 00:23:48.445 "params": { 00:23:48.445 "name": "nvme0n1", 00:23:48.445 "enable": true 00:23:48.445 } 00:23:48.445 }, 00:23:48.445 { 00:23:48.445 "method": "bdev_wait_for_examine" 00:23:48.445 } 00:23:48.445 ] 00:23:48.445 }, 00:23:48.445 { 00:23:48.445 "subsystem": "nbd", 00:23:48.445 "config": [] 00:23:48.445 } 00:23:48.445 ] 00:23:48.445 }' 00:23:48.445 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:48.445 02:45:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.445 [2024-12-16 02:45:19.018854] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:48.445 [2024-12-16 02:45:19.018904] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1026642 ] 00:23:48.445 [2024-12-16 02:45:19.094170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.704 [2024-12-16 02:45:19.116218] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:48.704 [2024-12-16 02:45:19.263499] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:49.271 02:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:49.271 02:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:49.271 02:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:23:49.271 02:45:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:23:49.530 02:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.530 02:45:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:49.530 Running I/O for 1 seconds... 00:23:50.907 5310.00 IOPS, 20.74 MiB/s 00:23:50.907 Latency(us) 00:23:50.907 [2024-12-16T01:45:21.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.907 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:50.907 Verification LBA range: start 0x0 length 0x2000 00:23:50.907 nvme0n1 : 1.01 5370.34 20.98 0.00 0.00 23678.19 5430.13 40445.07 00:23:50.907 [2024-12-16T01:45:21.566Z] =================================================================================================================== 00:23:50.907 [2024-12-16T01:45:21.566Z] Total : 5370.34 20.98 0.00 0.00 23678.19 5430.13 40445.07 00:23:50.907 { 00:23:50.907 "results": [ 00:23:50.907 { 00:23:50.907 "job": "nvme0n1", 00:23:50.907 "core_mask": "0x2", 00:23:50.907 "workload": "verify", 00:23:50.907 "status": "finished", 00:23:50.907 "verify_range": { 00:23:50.907 "start": 0, 00:23:50.907 "length": 8192 00:23:50.907 }, 00:23:50.907 "queue_depth": 128, 00:23:50.907 "io_size": 4096, 00:23:50.907 "runtime": 1.012785, 00:23:50.907 "iops": 5370.340200536145, 00:23:50.907 "mibps": 20.977891408344316, 00:23:50.907 "io_failed": 0, 00:23:50.907 "io_timeout": 0, 00:23:50.907 "avg_latency_us": 23678.18566087954, 00:23:50.907 "min_latency_us": 5430.125714285714, 00:23:50.907 "max_latency_us": 40445.07428571428 00:23:50.907 } 00:23:50.907 ], 00:23:50.907 "core_count": 1 00:23:50.907 } 00:23:50.907 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:23:50.907 
02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:23:50.907 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:50.907 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:23:50.907 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:23:50.907 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:23:50.907 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:50.907 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:50.907 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:50.907 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:50.907 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:50.907 nvmf_trace.0 00:23:50.907 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:23:50.907 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1026642 00:23:50.907 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1026642 ']' 00:23:50.907 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1026642 00:23:50.907 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:50.908 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:50.908 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 1026642 00:23:50.908 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:50.908 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:50.908 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1026642' 00:23:50.908 killing process with pid 1026642 00:23:50.908 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1026642 00:23:50.908 Received shutdown signal, test time was about 1.000000 seconds 00:23:50.908 00:23:50.908 Latency(us) 00:23:50.908 [2024-12-16T01:45:21.567Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:50.908 [2024-12-16T01:45:21.567Z] =================================================================================================================== 00:23:50.908 [2024-12-16T01:45:21.567Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:50.908 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1026642 00:23:50.908 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:50.908 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:50.908 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:23:50.908 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:50.908 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:23:50.908 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:50.908 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:50.908 rmmod nvme_tcp 00:23:50.908 rmmod nvme_fabrics 00:23:50.908 rmmod nvme_keyring 00:23:50.908 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- 
# modprobe -v -r nvme-fabrics 00:23:50.908 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:23:50.908 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:23:50.908 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1026463 ']' 00:23:50.908 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1026463 00:23:50.908 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1026463 ']' 00:23:50.908 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1026463 00:23:50.908 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:50.908 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:50.908 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1026463 00:23:51.167 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:51.167 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:51.167 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1026463' 00:23:51.167 killing process with pid 1026463 00:23:51.167 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1026463 00:23:51.167 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1026463 00:23:51.167 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:51.167 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:51.167 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:51.167 02:45:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:23:51.167 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:23:51.167 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:51.167 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:23:51.167 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:51.167 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:51.167 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.167 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:51.167 02:45:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.704 02:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:53.704 02:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.gwSzkocZTq /tmp/tmp.eMML9GoPVw /tmp/tmp.vFEfGfbcOR 00:23:53.704 00:23:53.704 real 1m18.818s 00:23:53.704 user 1m59.963s 00:23:53.704 sys 0m30.842s 00:23:53.704 02:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:53.704 02:45:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.704 ************************************ 00:23:53.704 END TEST nvmf_tls 00:23:53.704 ************************************ 00:23:53.704 02:45:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:53.704 02:45:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:53.704 
02:45:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:53.704 02:45:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:53.704 ************************************ 00:23:53.704 START TEST nvmf_fips 00:23:53.704 ************************************ 00:23:53.704 02:45:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:53.704 * Looking for test storage... 00:23:53.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@340 -- # ver1_l=2 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # 
return 0 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:53.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.704 --rc genhtml_branch_coverage=1 00:23:53.704 --rc genhtml_function_coverage=1 00:23:53.704 --rc genhtml_legend=1 00:23:53.704 --rc geninfo_all_blocks=1 00:23:53.704 --rc geninfo_unexecuted_blocks=1 00:23:53.704 00:23:53.704 ' 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:53.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.704 --rc genhtml_branch_coverage=1 00:23:53.704 --rc genhtml_function_coverage=1 00:23:53.704 --rc genhtml_legend=1 00:23:53.704 --rc geninfo_all_blocks=1 00:23:53.704 --rc geninfo_unexecuted_blocks=1 00:23:53.704 00:23:53.704 ' 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:53.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.704 --rc genhtml_branch_coverage=1 00:23:53.704 --rc genhtml_function_coverage=1 00:23:53.704 --rc genhtml_legend=1 00:23:53.704 --rc geninfo_all_blocks=1 00:23:53.704 --rc geninfo_unexecuted_blocks=1 00:23:53.704 00:23:53.704 ' 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:53.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:53.704 --rc genhtml_branch_coverage=1 00:23:53.704 --rc genhtml_function_coverage=1 00:23:53.704 --rc genhtml_legend=1 00:23:53.704 --rc geninfo_all_blocks=1 00:23:53.704 --rc geninfo_unexecuted_blocks=1 00:23:53.704 00:23:53.704 ' 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.704 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:53.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@90 -- # check_openssl_version 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:23:53.705 Error setting digest 00:23:53.705 40F206FBFE7E0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:23:53.705 40F206FBFE7E0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:53.705 02:45:24 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.705 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:53.706 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:53.706 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:23:53.706 02:45:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:00.272 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:00.272 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:00.272 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:00.272 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:00.272 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:00.272 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:00.272 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:00.272 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:00.273 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:00.273 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:00.273 Found net devices under 0000:af:00.0: cvl_0_0 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:00.273 Found net devices under 0000:af:00.1: cvl_0_1 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:00.273 02:45:29 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:00.273 02:45:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:00.273 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:00.273 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:00.273 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:00.273 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:00.273 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:00.273 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:00.273 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:00.273 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:00.273 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:00.273 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.518 ms 00:24:00.273 00:24:00.273 --- 10.0.0.2 ping statistics --- 00:24:00.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.273 rtt min/avg/max/mdev = 0.518/0.518/0.518/0.000 ms 00:24:00.273 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:00.273 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:00.273 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:24:00.273 00:24:00.273 --- 10.0.0.1 ping statistics --- 00:24:00.273 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.273 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:24:00.273 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.273 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:00.273 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:00.273 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.273 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:00.273 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:00.273 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.273 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:00.273 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:00.273 02:45:30 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:00.273 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:00.273 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:00.273 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:00.273 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1030591 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1030591 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1030591 ']' 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:00.274 [2024-12-16 02:45:30.359921] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:24:00.274 [2024-12-16 02:45:30.359967] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.274 [2024-12-16 02:45:30.438575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.274 [2024-12-16 02:45:30.459223] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.274 [2024-12-16 02:45:30.459256] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:00.274 [2024-12-16 02:45:30.459262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.274 [2024-12-16 02:45:30.459268] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.274 [2024-12-16 02:45:30.459273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:00.274 [2024-12-16 02:45:30.459739] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.ddi 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.ddi 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.ddi 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.ddi 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:00.274 [2024-12-16 02:45:30.773614] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:00.274 [2024-12-16 02:45:30.789619] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:00.274 [2024-12-16 02:45:30.789804] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:00.274 malloc0 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1030726 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1030726 /var/tmp/bdevperf.sock 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1030726 ']' 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:00.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:00.274 02:45:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:00.274 [2024-12-16 02:45:30.920630] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:24:00.274 [2024-12-16 02:45:30.920684] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1030726 ] 00:24:00.533 [2024-12-16 02:45:30.994929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.533 [2024-12-16 02:45:31.016987] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.533 02:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:00.533 02:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:00.533 02:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.ddi 00:24:00.792 02:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:01.050 [2024-12-16 02:45:31.467933] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:01.050 TLSTESTn1 00:24:01.050 02:45:31 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:01.050 Running I/O for 10 seconds... 
00:24:03.361 5340.00 IOPS, 20.86 MiB/s [2024-12-16T01:45:34.954Z] 5425.00 IOPS, 21.19 MiB/s [2024-12-16T01:45:35.891Z] 5497.67 IOPS, 21.48 MiB/s [2024-12-16T01:45:36.826Z] 5488.25 IOPS, 21.44 MiB/s [2024-12-16T01:45:37.762Z] 5488.00 IOPS, 21.44 MiB/s [2024-12-16T01:45:38.698Z] 5473.50 IOPS, 21.38 MiB/s [2024-12-16T01:45:40.073Z] 5493.14 IOPS, 21.46 MiB/s [2024-12-16T01:45:41.007Z] 5496.62 IOPS, 21.47 MiB/s [2024-12-16T01:45:41.943Z] 5486.33 IOPS, 21.43 MiB/s [2024-12-16T01:45:41.943Z] 5487.70 IOPS, 21.44 MiB/s 00:24:11.284 Latency(us) 00:24:11.284 [2024-12-16T01:45:41.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.284 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:11.284 Verification LBA range: start 0x0 length 0x2000 00:24:11.284 TLSTESTn1 : 10.01 5493.01 21.46 0.00 0.00 23268.30 5211.67 22843.98 00:24:11.284 [2024-12-16T01:45:41.943Z] =================================================================================================================== 00:24:11.284 [2024-12-16T01:45:41.943Z] Total : 5493.01 21.46 0.00 0.00 23268.30 5211.67 22843.98 00:24:11.284 { 00:24:11.284 "results": [ 00:24:11.284 { 00:24:11.284 "job": "TLSTESTn1", 00:24:11.284 "core_mask": "0x4", 00:24:11.284 "workload": "verify", 00:24:11.284 "status": "finished", 00:24:11.284 "verify_range": { 00:24:11.284 "start": 0, 00:24:11.284 "length": 8192 00:24:11.284 }, 00:24:11.284 "queue_depth": 128, 00:24:11.284 "io_size": 4096, 00:24:11.284 "runtime": 10.013269, 00:24:11.284 "iops": 5493.011323275146, 00:24:11.284 "mibps": 21.45707548154354, 00:24:11.284 "io_failed": 0, 00:24:11.284 "io_timeout": 0, 00:24:11.284 "avg_latency_us": 23268.30470441872, 00:24:11.284 "min_latency_us": 5211.672380952381, 00:24:11.284 "max_latency_us": 22843.977142857144 00:24:11.284 } 00:24:11.284 ], 00:24:11.284 "core_count": 1 00:24:11.284 } 00:24:11.284 02:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:11.284 
02:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:11.284 02:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:24:11.284 02:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:24:11.284 02:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:11.284 02:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:11.284 02:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:11.284 02:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:11.284 02:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:11.284 02:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:11.284 nvmf_trace.0 00:24:11.284 02:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:24:11.284 02:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1030726 00:24:11.284 02:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1030726 ']' 00:24:11.284 02:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1030726 00:24:11.284 02:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:11.284 02:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:11.284 02:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1030726 00:24:11.284 02:45:41 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:11.284 02:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:11.284 02:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1030726' 00:24:11.284 killing process with pid 1030726 00:24:11.284 02:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1030726 00:24:11.284 Received shutdown signal, test time was about 10.000000 seconds 00:24:11.284 00:24:11.284 Latency(us) 00:24:11.284 [2024-12-16T01:45:41.943Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.284 [2024-12-16T01:45:41.943Z] =================================================================================================================== 00:24:11.284 [2024-12-16T01:45:41.943Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:11.284 02:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1030726 00:24:11.543 02:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:11.543 02:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:11.543 02:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:11.543 02:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:11.543 02:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:11.543 02:45:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:11.543 02:45:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:11.543 rmmod nvme_tcp 00:24:11.543 rmmod nvme_fabrics 00:24:11.543 rmmod nvme_keyring 00:24:11.543 02:45:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:24:11.543 02:45:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:11.543 02:45:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:11.543 02:45:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1030591 ']' 00:24:11.543 02:45:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1030591 00:24:11.543 02:45:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1030591 ']' 00:24:11.543 02:45:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1030591 00:24:11.543 02:45:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:11.543 02:45:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:11.543 02:45:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1030591 00:24:11.543 02:45:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:11.543 02:45:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:11.543 02:45:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1030591' 00:24:11.543 killing process with pid 1030591 00:24:11.543 02:45:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1030591 00:24:11.543 02:45:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1030591 00:24:11.802 02:45:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:11.802 02:45:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:11.802 02:45:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:11.803 02:45:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:24:11.803 02:45:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:11.803 02:45:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:11.803 02:45:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:11.803 02:45:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:11.803 02:45:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:11.803 02:45:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.803 02:45:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:11.803 02:45:42 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:13.887 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:13.887 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.ddi 00:24:13.887 00:24:13.887 real 0m20.450s 00:24:13.887 user 0m21.183s 00:24:13.887 sys 0m9.586s 00:24:13.887 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:13.887 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:13.887 ************************************ 00:24:13.887 END TEST nvmf_fips 00:24:13.887 ************************************ 00:24:13.887 02:45:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:13.887 02:45:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:13.887 02:45:44 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:24:13.887 02:45:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:13.887 ************************************ 00:24:13.887 START TEST nvmf_control_msg_list 00:24:13.887 ************************************ 00:24:13.887 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:13.887 * Looking for test storage... 00:24:13.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:13.887 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:13.887 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:24:13.887 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:14.148 02:45:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:14.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.148 --rc genhtml_branch_coverage=1 00:24:14.148 --rc genhtml_function_coverage=1 00:24:14.148 --rc genhtml_legend=1 00:24:14.148 --rc geninfo_all_blocks=1 00:24:14.148 --rc geninfo_unexecuted_blocks=1 00:24:14.148 00:24:14.148 ' 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:14.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.148 --rc genhtml_branch_coverage=1 00:24:14.148 --rc genhtml_function_coverage=1 00:24:14.148 --rc genhtml_legend=1 00:24:14.148 --rc geninfo_all_blocks=1 00:24:14.148 --rc geninfo_unexecuted_blocks=1 00:24:14.148 00:24:14.148 ' 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:14.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.148 --rc genhtml_branch_coverage=1 00:24:14.148 --rc genhtml_function_coverage=1 00:24:14.148 --rc genhtml_legend=1 00:24:14.148 --rc geninfo_all_blocks=1 00:24:14.148 --rc geninfo_unexecuted_blocks=1 00:24:14.148 00:24:14.148 ' 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # 
LCOV='lcov 00:24:14.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:14.148 --rc genhtml_branch_coverage=1 00:24:14.148 --rc genhtml_function_coverage=1 00:24:14.148 --rc genhtml_legend=1 00:24:14.148 --rc geninfo_all_blocks=1 00:24:14.148 --rc geninfo_unexecuted_blocks=1 00:24:14.148 00:24:14.148 ' 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.148 02:45:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:14.148 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:14.149 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:14.149 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:14.149 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:14.149 02:45:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:14.149 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:14.149 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:14.149 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:14.149 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:14.149 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:14.149 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:14.149 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.149 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:14.149 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.149 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:14.149 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:14.149 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:14.149 02:45:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:20.716 02:45:50 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:24:20.716 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:20.716 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:20.717 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:20.717 02:45:50 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:20.717 Found net devices under 0000:af:00.0: cvl_0_0 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:20.717 02:45:50 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:20.717 Found net devices under 0000:af:00.1: cvl_0_1 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:20.717 02:45:50 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:20.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:20.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms 00:24:20.717 00:24:20.717 --- 10.0.0.2 ping statistics --- 00:24:20.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.717 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:20.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:20.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:24:20.717 00:24:20.717 --- 10.0.0.1 ping statistics --- 00:24:20.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:20.717 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1035982 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1035982 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1035982 ']' 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:20.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:20.717 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:20.718 [2024-12-16 02:45:50.656490] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:24:20.718 [2024-12-16 02:45:50.656536] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.718 [2024-12-16 02:45:50.733297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.718 [2024-12-16 02:45:50.756070] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.718 [2024-12-16 02:45:50.756108] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.718 [2024-12-16 02:45:50.756116] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.718 [2024-12-16 02:45:50.756123] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.718 [2024-12-16 02:45:50.756129] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:20.718 [2024-12-16 02:45:50.756616] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:20.718 [2024-12-16 02:45:50.899580] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:20.718 Malloc0 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:20.718 [2024-12-16 02:45:50.943933] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1036107 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1036108 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1036109 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1036107 00:24:20.718 02:45:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:20.718 [2024-12-16 02:45:51.018344] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:24:20.718 [2024-12-16 02:45:51.038394] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:20.718 [2024-12-16 02:45:51.038535] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:21.653 Initializing NVMe Controllers 00:24:21.653 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:21.653 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:21.653 Initialization complete. Launching workers. 00:24:21.653 ======================================================== 00:24:21.653 Latency(us) 00:24:21.654 Device Information : IOPS MiB/s Average min max 00:24:21.654 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4336.00 16.94 230.25 126.43 411.61 00:24:21.654 ======================================================== 00:24:21.654 Total : 4336.00 16.94 230.25 126.43 411.61 00:24:21.654 00:24:21.654 Initializing NVMe Controllers 00:24:21.654 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:21.654 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:21.654 Initialization complete. Launching workers. 
00:24:21.654 ======================================================== 00:24:21.654 Latency(us) 00:24:21.654 Device Information : IOPS MiB/s Average min max 00:24:21.654 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 4334.00 16.93 230.32 131.26 403.28 00:24:21.654 ======================================================== 00:24:21.654 Total : 4334.00 16.93 230.32 131.26 403.28 00:24:21.654 00:24:21.654 Initializing NVMe Controllers 00:24:21.654 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:21.654 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:21.654 Initialization complete. Launching workers. 00:24:21.654 ======================================================== 00:24:21.654 Latency(us) 00:24:21.654 Device Information : IOPS MiB/s Average min max 00:24:21.654 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4345.97 16.98 229.70 135.55 420.21 00:24:21.654 ======================================================== 00:24:21.654 Total : 4345.97 16.98 229.70 135.55 420.21 00:24:21.654 00:24:21.654 [2024-12-16 02:45:52.162424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1574640 is same with the state(6) to be set 00:24:21.654 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1036108 00:24:21.654 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1036109 00:24:21.654 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:21.654 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:21.654 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:21.654 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@121 -- # sync 00:24:21.654 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:21.654 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:21.654 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:21.654 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:21.654 rmmod nvme_tcp 00:24:21.654 rmmod nvme_fabrics 00:24:21.654 rmmod nvme_keyring 00:24:21.654 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:21.654 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:21.654 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:21.654 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 1035982 ']' 00:24:21.654 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1035982 00:24:21.654 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1035982 ']' 00:24:21.654 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1035982 00:24:21.654 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:24:21.654 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:21.654 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1035982 00:24:21.654 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:21.654 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:21.654 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1035982' 00:24:21.654 killing process with pid 1035982 00:24:21.654 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1035982 00:24:21.654 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1035982 00:24:21.913 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:21.913 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:21.913 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:21.913 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:21.913 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:24:21.913 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:21.913 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:24:21.913 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:21.913 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:21.913 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.913 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:21.913 02:45:52 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.450 02:45:54 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:24.450 00:24:24.450 real 0m10.108s 00:24:24.450 user 0m6.487s 00:24:24.450 sys 0m5.553s 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:24.450 ************************************ 00:24:24.450 END TEST nvmf_control_msg_list 00:24:24.450 ************************************ 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:24.450 ************************************ 00:24:24.450 START TEST nvmf_wait_for_buf 00:24:24.450 ************************************ 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:24.450 * Looking for test storage... 
00:24:24.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:24:24.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.450 --rc genhtml_branch_coverage=1 00:24:24.450 --rc genhtml_function_coverage=1 00:24:24.450 --rc genhtml_legend=1 00:24:24.450 --rc geninfo_all_blocks=1 00:24:24.450 --rc geninfo_unexecuted_blocks=1 00:24:24.450 00:24:24.450 ' 00:24:24.450 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:24.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.450 --rc genhtml_branch_coverage=1 00:24:24.450 --rc genhtml_function_coverage=1 00:24:24.450 --rc genhtml_legend=1 00:24:24.450 --rc geninfo_all_blocks=1 00:24:24.451 --rc geninfo_unexecuted_blocks=1 00:24:24.451 00:24:24.451 ' 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:24.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.451 --rc genhtml_branch_coverage=1 00:24:24.451 --rc genhtml_function_coverage=1 00:24:24.451 --rc genhtml_legend=1 00:24:24.451 --rc geninfo_all_blocks=1 00:24:24.451 --rc geninfo_unexecuted_blocks=1 00:24:24.451 00:24:24.451 ' 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:24.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:24.451 --rc genhtml_branch_coverage=1 00:24:24.451 --rc genhtml_function_coverage=1 00:24:24.451 --rc genhtml_legend=1 00:24:24.451 --rc geninfo_all_blocks=1 00:24:24.451 --rc geninfo_unexecuted_blocks=1 00:24:24.451 00:24:24.451 ' 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:24.451 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:24.451 02:45:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:31.020 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:31.020 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:31.020 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:31.021 Found net devices under 0000:af:00.0: cvl_0_0 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:31.021 02:46:00 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:31.021 Found net devices under 0000:af:00.1: cvl_0_1 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:31.021 02:46:00 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:31.021 02:46:00 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:24:31.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:31.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms
00:24:31.021 
00:24:31.021 --- 10.0.0.2 ping statistics ---
00:24:31.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:31.021 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms
00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:31.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:31.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms
00:24:31.021 
00:24:31.021 --- 10.0.0.1 ping statistics ---
00:24:31.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:31.021 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms
00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0
00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc
00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x
00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1039802
00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@510 -- # waitforlisten 1039802 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1039802 ']' 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:31.021 [2024-12-16 02:46:00.789113] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:24:31.021 [2024-12-16 02:46:00.789156] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.021 [2024-12-16 02:46:00.866918] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.021 [2024-12-16 02:46:00.888661] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.021 [2024-12-16 02:46:00.888694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:31.021 [2024-12-16 02:46:00.888705] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.021 [2024-12-16 02:46:00.888711] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.021 [2024-12-16 02:46:00.888716] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:31.021 [2024-12-16 02:46:00.889202] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.021 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:31.022 
02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.022 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:31.022 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.022 02:46:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:31.022 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.022 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:31.022 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.022 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:31.022 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.022 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:31.022 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.022 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:31.022 Malloc0 00:24:31.022 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.022 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:31.022 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.022 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:24:31.022 [2024-12-16 02:46:01.082804] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:31.022 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.022 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:31.022 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.022 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:31.022 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.022 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:31.022 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.022 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:31.022 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.022 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:31.022 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.022 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:31.022 [2024-12-16 02:46:01.110996] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:31.022 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
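The bring-up that the trace above performs step by step (tiny iobuf pool, malloc bdev, TCP transport, subsystem, namespace, listener) can be recapped as a plain RPC sequence. This is a sketch added for readability, not part of the log: the `rpc()` wrapper is an illustration that only echoes each `scripts/rpc.py` invocation, so it runs without a live target; drop the echo to drive a real `nvmf_tgt` launched with `--wait-for-rpc`. The command names and flag values are copied from the trace.

```shell
# Dry-run sketch of the wait_for_buf.sh target bring-up traced above.
# rpc() only prints each scripts/rpc.py call; remove the echo to execute
# against a live nvmf_tgt started with --wait-for-rpc.
rpc() { echo "+ scripts/rpc.py $*"; }

rpc accel_set_options --small-cache-size 0 --large-cache-size 0
rpc iobuf_set_options --small-pool-count 154 --small_bufsize=8192  # tiny pool on purpose: forces buffer-wait retries
rpc framework_start_init
rpc bdev_malloc_create -b Malloc0 32 512                           # 32 MB malloc bdev, 512-byte blocks
rpc nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
```

The undersized small pool is the point of the test: the 128 KiB random-read load that `spdk_nvme_perf` generates next exhausts it, and the pass/fail check below (`retry_count` from `iobuf_get_stats`, compared against 0 at wait_for_buf.sh@33) verifies that retries actually occurred.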
00:24:31.022 02:46:01 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:31.022 [2024-12-16 02:46:01.196907] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:31.956 Initializing NVMe Controllers 00:24:31.956 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:31.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:31.956 Initialization complete. Launching workers. 00:24:31.956 ======================================================== 00:24:31.956 Latency(us) 00:24:31.956 Device Information : IOPS MiB/s Average min max 00:24:31.956 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 124.81 15.60 33174.88 7269.44 63854.92 00:24:31.956 ======================================================== 00:24:31.956 Total : 124.81 15.60 33174.88 7269.44 63854.92 00:24:31.956 00:24:31.956 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:31.956 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:31.956 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.956 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:31.956 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.956 02:46:02 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1974 00:24:31.956 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1974 -eq 0 ]] 00:24:31.956 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:31.956 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:31.956 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:31.956 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:31.956 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:31.956 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:31.956 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:31.956 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:31.956 rmmod nvme_tcp 00:24:31.956 rmmod nvme_fabrics 00:24:32.216 rmmod nvme_keyring 00:24:32.216 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:32.216 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:32.216 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:32.216 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1039802 ']' 00:24:32.216 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1039802 00:24:32.216 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1039802 ']' 00:24:32.216 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1039802 
00:24:32.216 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:24:32.216 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:32.216 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1039802 00:24:32.216 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:32.216 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:32.216 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1039802' 00:24:32.216 killing process with pid 1039802 00:24:32.216 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1039802 00:24:32.216 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1039802 00:24:32.216 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:32.216 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:32.216 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:32.216 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:32.216 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:24:32.216 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:32.216 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:24:32.216 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:32.216 02:46:02 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:32.216 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.216 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:32.216 02:46:02 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.756 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:34.756 00:24:34.756 real 0m10.291s 00:24:34.756 user 0m3.909s 00:24:34.756 sys 0m4.835s 00:24:34.756 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:34.756 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:34.756 ************************************ 00:24:34.756 END TEST nvmf_wait_for_buf 00:24:34.756 ************************************ 00:24:34.756 02:46:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:34.756 02:46:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:34.756 02:46:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:34.756 02:46:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:34.756 02:46:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:34.756 ************************************ 00:24:34.756 START TEST nvmf_fuzz 00:24:34.756 ************************************ 00:24:34.756 02:46:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh 
--transport=tcp 00:24:34.756 * Looking for test storage... 00:24:34.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:34.756 02:46:05 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:34.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.756 --rc genhtml_branch_coverage=1 00:24:34.756 --rc genhtml_function_coverage=1 
00:24:34.756 --rc genhtml_legend=1 00:24:34.756 --rc geninfo_all_blocks=1 00:24:34.756 --rc geninfo_unexecuted_blocks=1 00:24:34.756 00:24:34.756 ' 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:34.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.756 --rc genhtml_branch_coverage=1 00:24:34.756 --rc genhtml_function_coverage=1 00:24:34.756 --rc genhtml_legend=1 00:24:34.756 --rc geninfo_all_blocks=1 00:24:34.756 --rc geninfo_unexecuted_blocks=1 00:24:34.756 00:24:34.756 ' 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:34.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.756 --rc genhtml_branch_coverage=1 00:24:34.756 --rc genhtml_function_coverage=1 00:24:34.756 --rc genhtml_legend=1 00:24:34.756 --rc geninfo_all_blocks=1 00:24:34.756 --rc geninfo_unexecuted_blocks=1 00:24:34.756 00:24:34.756 ' 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:34.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:34.756 --rc genhtml_branch_coverage=1 00:24:34.756 --rc genhtml_function_coverage=1 00:24:34.756 --rc genhtml_legend=1 00:24:34.756 --rc geninfo_all_blocks=1 00:24:34.756 --rc geninfo_unexecuted_blocks=1 00:24:34.756 00:24:34.756 ' 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:34.756 
02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:34.756 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:34.757 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:34.757 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.757 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.757 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.757 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:34.757 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:34.757 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:34.757 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:34.757 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:34.757 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:34.757 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:34.757 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:34.757 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:34.757 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:34.757 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:34.757 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:34.757 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:34.757 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:34.757 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:34.757 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:34.757 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:34.757 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:34.757 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:34.757 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.757 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:34.757 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.757 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:34.757 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:34.757 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:34.757 02:46:05 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:41.326 02:46:10 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.0 (0x8086 - 0x159b)' 00:24:41.326 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:41.326 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:41.326 Found net devices under 0000:af:00.0: cvl_0_0 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:41.326 Found net devices under 0000:af:00.1: cvl_0_1 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:41.326 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:41.327 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:41.327 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:41.327 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:41.327 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:41.327 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:41.327 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:41.327 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:41.327 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:41.327 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:41.327 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:41.327 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:41.327 02:46:10 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:41.327 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:41.327 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:41.327 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:41.327 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:41.327 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:41.327 02:46:10 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:41.327 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:41.327 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:24:41.327 00:24:41.327 --- 10.0.0.2 ping statistics --- 00:24:41.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.327 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:41.327 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:41.327 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:24:41.327 00:24:41.327 --- 10.0.0.1 ping statistics --- 00:24:41.327 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:41.327 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1043508 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1043508 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' 
-z 1043508 ']' 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:41.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:41.327 Malloc0 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:41.327 02:46:11 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:13.398 Fuzzing completed. 
Shutting down the fuzz application 00:25:13.398 00:25:13.398 Dumping successful admin opcodes: 00:25:13.398 9, 10, 00:25:13.398 Dumping successful io opcodes: 00:25:13.398 0, 9, 00:25:13.398 NS: 0x2000008eff00 I/O qp, Total commands completed: 896459, total successful commands: 5221, random_seed: 2528335616 00:25:13.398 NS: 0x2000008eff00 admin qp, Total commands completed: 85664, total successful commands: 20, random_seed: 2214257984 00:25:13.398 02:46:41 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:13.398 Fuzzing completed. Shutting down the fuzz application 00:25:13.398 00:25:13.398 Dumping successful admin opcodes: 00:25:13.398 00:25:13.398 Dumping successful io opcodes: 00:25:13.398 00:25:13.398 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1643243046 00:25:13.398 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 0, random_seed: 1643305652 00:25:13.398 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:13.398 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.398 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:13.398 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.398 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:13.398 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:13.398 02:46:43 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:13.398 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:13.398 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:13.398 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:13.398 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:13.398 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:13.398 rmmod nvme_tcp 00:25:13.398 rmmod nvme_fabrics 00:25:13.398 rmmod nvme_keyring 00:25:13.398 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:13.398 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:13.398 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:13.398 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 1043508 ']' 00:25:13.398 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 1043508 00:25:13.399 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1043508 ']' 00:25:13.399 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 1043508 00:25:13.399 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:25:13.399 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:13.399 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1043508 00:25:13.399 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:13.399 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:25:13.399 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1043508' 00:25:13.399 killing process with pid 1043508 00:25:13.399 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 1043508 00:25:13.399 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 1043508 00:25:13.399 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:13.399 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:13.399 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:13.399 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:13.399 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:25:13.399 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:13.399 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:25:13.399 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:13.399 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:13.399 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.399 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:13.399 02:46:43 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.777 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:14.777 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:15.035 00:25:15.035 real 0m40.481s 00:25:15.035 user 0m51.955s 00:25:15.035 sys 0m17.453s 00:25:15.035 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:15.035 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:15.035 ************************************ 00:25:15.035 END TEST nvmf_fuzz 00:25:15.035 ************************************ 00:25:15.035 02:46:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:15.035 02:46:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:15.035 02:46:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:15.035 02:46:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:15.035 ************************************ 00:25:15.035 START TEST nvmf_multiconnection 00:25:15.035 ************************************ 00:25:15.035 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:15.035 * Looking for test storage... 
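The fuzz run above (and the multiconnection test starting here) drive NVMe-oF/TCP over a network-namespace loopback: `nvmf_tcp_init` moves the target-side NIC into a private netns, assigns each side one half of 10.0.0.0/24, and opens port 4420 for the initiator. A dry-run sketch of that plumbing, with interface names taken from this log; `run` only echoes each command, so the sketch stays runnable without root or the real e810 NICs (swap the body of `run` for `"$@"` to execute for real):

```shell
# Dry-run sketch of the nvmf_tcp_init steps traced in the log above.
nvmf_tcp_init_dryrun() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    run() { echo "+ $*"; }               # echo-only; use "$@" to really run (needs root)

    run ip -4 addr flush "$target_if"
    run ip -4 addr flush "$initiator_if"
    run ip netns add "$ns"
    run ip link set "$target_if" netns "$ns"          # target NIC into the namespace
    run ip addr add 10.0.0.1/24 dev "$initiator_if"   # initiator side, default ns
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    run ip link set "$initiator_if" up
    run ip netns exec "$ns" ip link set "$target_if" up
    run ip netns exec "$ns" ip link set lo up
    run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 10.0.0.2                            # sanity checks, as in the log
    run ip netns exec "$ns" ping -c 1 10.0.0.1
}
nvmf_tcp_init_dryrun
```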
00:25:15.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:15.035 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:15.035 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:25:15.035 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:15.035 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:15.035 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:15.035 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:15.035 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:15.035 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:15.035 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:15.035 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:15.035 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:15.035 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:15.035 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:15.035 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:15.035 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:15.035 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:15.035 02:46:45 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:15.035 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:15.035 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:15.035 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:15.035 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:15.035 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:15.035 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
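The trace above is `scripts/common.sh` deciding whether the installed lcov is older than 2 (`lt 1.15 2`): both version strings are split into fields and compared numerically, field by field. A minimal sketch of the same idea; note the real `cmp_versions` also splits on `-` and `:`, while this sketch handles plain dotted numeric versions only:

```shell
# Field-by-field numeric version compare, as in scripts/common.sh's cmp_versions.
# version_lt A B returns 0 (true) iff A < B.
version_lt() {
    local IFS=.                          # split unquoted expansions on "."
    local -a a=($1) b=($2)
    local i x y n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        x=${a[i]:-0} y=${b[i]:-0}        # missing fields count as 0
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                             # equal is not "less than"
}
```

With these semantics `version_lt 1.15 2` succeeds (1 < 2 in the first field), which is why the log proceeds with the lcov-1.x `--rc lcov_branch_coverage=1 ...` option set.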
00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:15.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.295 --rc genhtml_branch_coverage=1 00:25:15.295 --rc genhtml_function_coverage=1 00:25:15.295 --rc genhtml_legend=1 00:25:15.295 --rc geninfo_all_blocks=1 00:25:15.295 --rc geninfo_unexecuted_blocks=1 00:25:15.295 00:25:15.295 ' 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:15.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.295 --rc genhtml_branch_coverage=1 00:25:15.295 --rc genhtml_function_coverage=1 00:25:15.295 --rc genhtml_legend=1 00:25:15.295 --rc geninfo_all_blocks=1 00:25:15.295 --rc geninfo_unexecuted_blocks=1 00:25:15.295 00:25:15.295 ' 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:15.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.295 --rc genhtml_branch_coverage=1 00:25:15.295 --rc genhtml_function_coverage=1 00:25:15.295 --rc genhtml_legend=1 00:25:15.295 --rc geninfo_all_blocks=1 00:25:15.295 --rc geninfo_unexecuted_blocks=1 00:25:15.295 00:25:15.295 ' 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:15.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.295 --rc genhtml_branch_coverage=1 00:25:15.295 --rc genhtml_function_coverage=1 00:25:15.295 --rc genhtml_legend=1 00:25:15.295 --rc geninfo_all_blocks=1 00:25:15.295 --rc geninfo_unexecuted_blocks=1 00:25:15.295 00:25:15.295 ' 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@7 -- # uname -s 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:15.295 02:46:45 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:15.295 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.296 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.296 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.296 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:15.296 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.296 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:15.296 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:15.296 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:15.296 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:15.296 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:15.296 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:15.296 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:15.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:15.296 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:15.296 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:15.296 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:15.296 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:25:15.296 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:15.296 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:15.296 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:15.296 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:15.296 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:15.296 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:15.296 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:15.296 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:15.296 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.296 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:15.296 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.296 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:15.296 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:15.296 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:15.296 02:46:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
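The `gather_supported_nvmf_pci_devs` loop that follows resolves each supported PCI address to its kernel net device by globbing `/sys/bus/pci/devices/<addr>/net/` (the `pci_net_devs=(".../net/"*)` expansion visible in the trace). A self-contained sketch of that lookup, demonstrated against a throwaway fake sysfs tree so it runs anywhere:

```shell
# List the network interface names the kernel exposes under a PCI device,
# mirroring the pci_net_devs glob in nvmf/common.sh. sysfs_root is
# parameterized only so the example can use a fake tree instead of /sys.
pci_net_names() {
    local sysfs_root=$1 pci=$2 d
    for d in "$sysfs_root/$pci/net/"*/; do
        [ -d "$d" ] || continue          # glob matched nothing
        d=${d%/}
        echo "${d##*/}"                  # keep the interface name only
    done
}

# Fake tree mimicking the log's "Found net devices under 0000:af:00.0: cvl_0_0"
root=$(mktemp -d)
mkdir -p "$root/0000:af:00.0/net/cvl_0_0" "$root/0000:af:00.1/net/cvl_0_1"
pci_net_names "$root" 0000:af:00.0       # prints: cvl_0_0
rm -rf "$root"
```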
00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:21.869 02:46:51 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:21.869 02:46:51 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:21.869 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:21.869 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:21.869 Found net devices under 0000:af:00.0: cvl_0_0 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == 
up ]] 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:21.869 Found net devices under 0000:af:00.1: cvl_0_1 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:21.869 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:21.870 02:46:51 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:21.870 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:21.870 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.390 ms 00:25:21.870 00:25:21.870 --- 10.0.0.2 ping statistics --- 00:25:21.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.870 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:21.870 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:21.870 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:25:21.870 00:25:21.870 --- 10.0.0.1 ping statistics --- 00:25:21.870 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.870 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
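The `nvmf_tcp_init` steps traced above (flush addresses, create the `cvl_0_0_ns_spdk` namespace, move the target port into it, assign 10.0.0.1/10.0.0.2, open TCP port 4420, then ping in both directions) can be sketched as a script. This is a minimal sketch, not the test's own helper: the interface names are taken from this log but are machine-specific, and the `run` dry-run wrapper is an assumption added here because the real commands need root and the actual NICs.

```shell
#!/bin/sh
# Sketch of the namespace plumbing performed by nvmf_tcp_init in the log.
# TGT_IF/INI_IF are placeholders for the two E810 port netdevs on this node.
TGT_IF=cvl_0_0
INI_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

run() { echo "$@"; }   # dry-run: print each command instead of executing it

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"                          # target side lives in the namespace
run ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator IP on the host side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target IP inside the namespace
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic
run ping -c 1 10.0.0.2                                         # host -> namespace
run ip netns exec "$NS" ping -c 1 10.0.0.1                     # namespace -> host
```

The two pings at the end mirror the log's success criterion: both directions must answer before `nvmf_common` returns 0 and the target is started inside the namespace.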
00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=1052072 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 1052072 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 1052072 ']' 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
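`waitforlisten 1052072` above blocks until the freshly launched `nvmf_tgt` is reachable on `/var/tmp/spdk.sock` (the "Waiting for process to start up and listen on UNIX domain socket" message). A rough sketch of that polling pattern, under stated assumptions: the real helper issues an RPC against the socket, while this simplified version only waits for the socket path to appear, and the retry count is illustrative.

```shell
#!/bin/sh
# Sketch of the waitforlisten loop: poll until an RPC socket shows up,
# or give up after max_retries attempts. Checking file existence is a
# simplification of the real RPC probe.
wait_for_sock() {
    sock=$1
    max_retries=${2:-100}
    i=0
    while [ "$i" -lt "$max_retries" ]; do
        [ -e "$sock" ] && return 0
        i=$((i + 1))
        sleep 0.1
    done
    return 1
}
```

Typical use would be `wait_for_sock /var/tmp/spdk.sock 100 || echo "target never came up"`, after which the test proceeds to issue `rpc_cmd` calls.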
00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.870 [2024-12-16 02:46:51.682666] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:25:21.870 [2024-12-16 02:46:51.682708] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:21.870 [2024-12-16 02:46:51.761886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:21.870 [2024-12-16 02:46:51.785603] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:21.870 [2024-12-16 02:46:51.785642] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:21.870 [2024-12-16 02:46:51.785649] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:21.870 [2024-12-16 02:46:51.785655] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:21.870 [2024-12-16 02:46:51.785660] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
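Once the reactors are up, the trace that follows creates one TCP transport and then iterates `seq 1 $NVMF_SUBSYS` (1..11 in this run), giving each subsystem a 64 MiB malloc bdev with 512-byte blocks, a namespace, and a listener on 10.0.0.2:4420. A sketch of that per-subsystem RPC sequence; `rpc.py` stands in for the log's `rpc_cmd` wrapper, and the dry-run `rpc` function here prints the calls rather than issuing them to a live target.

```shell
#!/bin/sh
# Sketch of the multiconnection bring-up: one transport, then eleven
# subsystems each backed by a malloc bdev and exposed on NVMe/TCP.
NVMF_SUBSYS=11
TARGET_IP=10.0.0.2
PORT=4420

rpc() { echo "rpc.py $*"; }   # dry-run stand-in for rpc_cmd

rpc nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 1 "$NVMF_SUBSYS"); do
    rpc bdev_malloc_create 64 512 -b "Malloc$i"                               # 64 MiB, 512 B blocks
    rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"    # allow-any-host, serial SPDKn
    rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a "$TARGET_IP" -s "$PORT"
done
```

All eleven subsystems share the same listener address and port, which is what lets the later connect phase fan out multiple NVMe/TCP connections against a single target endpoint.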
00:25:21.870 [2024-12-16 02:46:51.786972] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.870 [2024-12-16 02:46:51.787084] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:25:21.870 [2024-12-16 02:46:51.787190] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.870 [2024-12-16 02:46:51.787191] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.870 [2024-12-16 02:46:51.918741] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:21.870 02:46:51 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.870 Malloc1 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.870 [2024-12-16 02:46:51.989739] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.870 02:46:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.870 Malloc2 00:25:21.870 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.870 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:21.870 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.870 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.870 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.870 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:21.870 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.870 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:21.870 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.870 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.871 Malloc3 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.871 Malloc4 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.871 
02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.871 Malloc5 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.871 02:46:52 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.871 Malloc6 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.871 Malloc7 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.871 02:46:52 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.871 Malloc8 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.871 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.872 02:46:52 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.872 Malloc9 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.872 Malloc10 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.872 Malloc11 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:25:21.872 
02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:21.872 02:46:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:25:23.247 02:46:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:25:23.247 02:46:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:23.247 02:46:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:23.247 02:46:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:23.247 02:46:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:25.253 02:46:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:25.253 02:46:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:25.253 02:46:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:25:25.253 02:46:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:25.253 02:46:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:25.253 02:46:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:25.253 02:46:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:25.253 02:46:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:25:26.186 02:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:25:26.186 02:46:56 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:26.186 02:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:26.186 02:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:26.186 02:46:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:28.714 02:46:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:28.714 02:46:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:28.714 02:46:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:25:28.714 02:46:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:28.714 02:46:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:28.714 02:46:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:28.714 02:46:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:28.714 02:46:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:25:29.649 02:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:25:29.649 02:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:29.649 02:46:59 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:29.649 02:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:29.649 02:46:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:31.549 02:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:31.549 02:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:31.549 02:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:25:31.549 02:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:31.549 02:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:31.549 02:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:31.549 02:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:31.549 02:47:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:25:32.921 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:25:32.921 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:32.921 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:32.921 
02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:32.921 02:47:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:34.819 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:34.819 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:34.819 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:25:34.819 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:34.819 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:34.819 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:34.819 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:34.819 02:47:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:25:36.191 02:47:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:25:36.191 02:47:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:36.191 02:47:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:36.191 02:47:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:36.191 02:47:06 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:38.090 02:47:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:38.090 02:47:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:38.090 02:47:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:25:38.090 02:47:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:38.090 02:47:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:38.090 02:47:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:38.090 02:47:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:38.090 02:47:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:39.463 02:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:39.463 02:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:39.463 02:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:39.463 02:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:39.463 02:47:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:41.363 02:47:11 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:41.363 02:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:41.363 02:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:25:41.363 02:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:41.363 02:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:41.363 02:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:41.363 02:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:41.363 02:47:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:42.738 02:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:42.738 02:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:42.738 02:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:42.738 02:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:42.738 02:47:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:44.639 02:47:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:44.639 02:47:15 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:44.639 02:47:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:25:44.639 02:47:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:44.639 02:47:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:44.639 02:47:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:44.639 02:47:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:44.639 02:47:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:46.016 02:47:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:46.016 02:47:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:46.016 02:47:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:46.016 02:47:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:46.016 02:47:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:47.919 02:47:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:47.919 02:47:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:47.919 02:47:18 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:25:47.919 02:47:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:47.919 02:47:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:47.919 02:47:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:47.919 02:47:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.919 02:47:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:49.297 02:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:49.297 02:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:49.297 02:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:49.297 02:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:49.297 02:47:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:51.831 02:47:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:51.831 02:47:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:51.831 02:47:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:25:51.831 02:47:21 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:51.831 02:47:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:51.831 02:47:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:51.831 02:47:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:51.831 02:47:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:52.766 02:47:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:52.766 02:47:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:52.766 02:47:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:52.766 02:47:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:52.766 02:47:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:54.673 02:47:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:54.673 02:47:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:54.673 02:47:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:25:54.673 02:47:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:54.673 02:47:25 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:54.673 02:47:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:54.673 02:47:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:54.673 02:47:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:56.577 02:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:56.577 02:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:56.577 02:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:56.577 02:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:56.577 02:47:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:58.653 02:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:58.653 02:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:58.653 02:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:25:58.653 02:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:58.654 02:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:58.654 
02:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:58.654 02:47:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:58.654 [global] 00:25:58.654 thread=1 00:25:58.654 invalidate=1 00:25:58.654 rw=read 00:25:58.654 time_based=1 00:25:58.654 runtime=10 00:25:58.654 ioengine=libaio 00:25:58.654 direct=1 00:25:58.654 bs=262144 00:25:58.654 iodepth=64 00:25:58.654 norandommap=1 00:25:58.654 numjobs=1 00:25:58.654 00:25:58.654 [job0] 00:25:58.654 filename=/dev/nvme0n1 00:25:58.654 [job1] 00:25:58.654 filename=/dev/nvme10n1 00:25:58.654 [job2] 00:25:58.654 filename=/dev/nvme1n1 00:25:58.654 [job3] 00:25:58.654 filename=/dev/nvme2n1 00:25:58.654 [job4] 00:25:58.654 filename=/dev/nvme3n1 00:25:58.654 [job5] 00:25:58.654 filename=/dev/nvme4n1 00:25:58.654 [job6] 00:25:58.654 filename=/dev/nvme5n1 00:25:58.654 [job7] 00:25:58.654 filename=/dev/nvme6n1 00:25:58.654 [job8] 00:25:58.654 filename=/dev/nvme7n1 00:25:58.654 [job9] 00:25:58.654 filename=/dev/nvme8n1 00:25:58.654 [job10] 00:25:58.654 filename=/dev/nvme9n1 00:25:58.654 Could not set queue depth (nvme0n1) 00:25:58.654 Could not set queue depth (nvme10n1) 00:25:58.654 Could not set queue depth (nvme1n1) 00:25:58.654 Could not set queue depth (nvme2n1) 00:25:58.654 Could not set queue depth (nvme3n1) 00:25:58.654 Could not set queue depth (nvme4n1) 00:25:58.654 Could not set queue depth (nvme5n1) 00:25:58.654 Could not set queue depth (nvme6n1) 00:25:58.654 Could not set queue depth (nvme7n1) 00:25:58.654 Could not set queue depth (nvme8n1) 00:25:58.654 Could not set queue depth (nvme9n1) 00:25:58.912 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.912 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:25:58.912 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.912 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.912 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.912 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.912 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.912 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.912 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.912 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.912 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:58.912 fio-3.35 00:25:58.912 Starting 11 threads 00:26:11.120 00:26:11.120 job0: (groupid=0, jobs=1): err= 0: pid=1058572: Mon Dec 16 02:47:39 2024 00:26:11.120 read: IOPS=258, BW=64.6MiB/s (67.8MB/s)(652MiB/10082msec) 00:26:11.120 slat (usec): min=21, max=332109, avg=2613.94, stdev=16062.11 00:26:11.120 clat (msec): min=6, max=856, avg=244.61, stdev=222.65 00:26:11.120 lat (msec): min=6, max=870, avg=247.22, stdev=225.04 00:26:11.120 clat percentiles (msec): 00:26:11.120 | 1.00th=[ 15], 5.00th=[ 33], 10.00th=[ 53], 20.00th=[ 79], 00:26:11.120 | 30.00th=[ 95], 40.00th=[ 110], 50.00th=[ 140], 60.00th=[ 171], 00:26:11.120 | 70.00th=[ 257], 80.00th=[ 502], 90.00th=[ 609], 95.00th=[ 701], 00:26:11.120 | 99.00th=[ 818], 99.50th=[ 844], 99.90th=[ 860], 99.95th=[ 860], 00:26:11.120 | 99.99th=[ 860] 00:26:11.120 bw ( KiB/s): min=15872, max=175616, 
per=7.04%, avg=65100.80, stdev=48354.02, samples=20 00:26:11.120 iops : min= 62, max= 686, avg=254.30, stdev=188.88, samples=20 00:26:11.120 lat (msec) : 10=0.54%, 20=1.46%, 50=7.36%, 100=23.94%, 250=36.17% 00:26:11.120 lat (msec) : 500=10.32%, 750=17.18%, 1000=3.03% 00:26:11.120 cpu : usr=0.07%, sys=1.16%, ctx=573, majf=0, minf=4097 00:26:11.120 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:11.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:11.120 issued rwts: total=2607,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.120 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:11.120 job1: (groupid=0, jobs=1): err= 0: pid=1058590: Mon Dec 16 02:47:39 2024 00:26:11.120 read: IOPS=145, BW=36.5MiB/s (38.3MB/s)(370MiB/10132msec) 00:26:11.120 slat (usec): min=16, max=401480, avg=4741.26, stdev=21575.15 00:26:11.120 clat (msec): min=15, max=950, avg=433.14, stdev=225.96 00:26:11.120 lat (msec): min=16, max=1095, avg=437.88, stdev=229.22 00:26:11.120 clat percentiles (msec): 00:26:11.120 | 1.00th=[ 35], 5.00th=[ 79], 10.00th=[ 109], 20.00th=[ 174], 00:26:11.120 | 30.00th=[ 309], 40.00th=[ 388], 50.00th=[ 422], 60.00th=[ 502], 00:26:11.120 | 70.00th=[ 575], 80.00th=[ 634], 90.00th=[ 718], 95.00th=[ 827], 00:26:11.120 | 99.00th=[ 894], 99.50th=[ 911], 99.90th=[ 953], 99.95th=[ 953], 00:26:11.120 | 99.99th=[ 953] 00:26:11.120 bw ( KiB/s): min= 6144, max=81920, per=3.92%, avg=36245.20, stdev=17054.90, samples=20 00:26:11.120 iops : min= 24, max= 320, avg=141.55, stdev=66.61, samples=20 00:26:11.120 lat (msec) : 20=0.47%, 50=1.22%, 100=6.90%, 250=15.42%, 500=35.70% 00:26:11.120 lat (msec) : 750=31.64%, 1000=8.65% 00:26:11.120 cpu : usr=0.06%, sys=0.60%, ctx=305, majf=0, minf=4097 00:26:11.120 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.2%, >=64=95.7% 00:26:11.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.120 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:11.120 issued rwts: total=1479,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.120 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:11.120 job2: (groupid=0, jobs=1): err= 0: pid=1058601: Mon Dec 16 02:47:39 2024 00:26:11.120 read: IOPS=613, BW=153MiB/s (161MB/s)(1548MiB/10086msec) 00:26:11.120 slat (usec): min=14, max=443495, avg=1583.31, stdev=10904.10 00:26:11.120 clat (msec): min=17, max=1225, avg=102.57, stdev=153.93 00:26:11.120 lat (msec): min=17, max=1225, avg=104.15, stdev=156.27 00:26:11.120 clat percentiles (msec): 00:26:11.120 | 1.00th=[ 25], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 32], 00:26:11.120 | 30.00th=[ 34], 40.00th=[ 36], 50.00th=[ 41], 60.00th=[ 55], 00:26:11.120 | 70.00th=[ 81], 80.00th=[ 124], 90.00th=[ 205], 95.00th=[ 518], 00:26:11.120 | 99.00th=[ 810], 99.50th=[ 877], 99.90th=[ 1062], 99.95th=[ 1062], 00:26:11.120 | 99.99th=[ 1234] 00:26:11.120 bw ( KiB/s): min=17920, max=537600, per=16.96%, avg=156825.60, stdev=165185.89, samples=20 00:26:11.120 iops : min= 70, max= 2100, avg=612.60, stdev=645.26, samples=20 00:26:11.120 lat (msec) : 20=0.02%, 50=57.66%, 100=17.80%, 250=16.40%, 500=3.00% 00:26:11.120 lat (msec) : 750=3.99%, 1000=0.97%, 2000=0.16% 00:26:11.120 cpu : usr=0.14%, sys=2.55%, ctx=866, majf=0, minf=4097 00:26:11.120 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:26:11.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:11.120 issued rwts: total=6190,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.120 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:11.120 job3: (groupid=0, jobs=1): err= 0: pid=1058607: Mon Dec 16 02:47:39 2024 00:26:11.120 read: IOPS=557, BW=139MiB/s (146MB/s)(1404MiB/10076msec) 00:26:11.120 slat (usec): min=9, 
max=184085, avg=1710.87, stdev=8329.89 00:26:11.120 clat (msec): min=13, max=871, avg=112.98, stdev=128.15 00:26:11.120 lat (msec): min=15, max=871, avg=114.69, stdev=130.01 00:26:11.120 clat percentiles (msec): 00:26:11.120 | 1.00th=[ 25], 5.00th=[ 29], 10.00th=[ 30], 20.00th=[ 33], 00:26:11.120 | 30.00th=[ 34], 40.00th=[ 36], 50.00th=[ 39], 60.00th=[ 93], 00:26:11.120 | 70.00th=[ 127], 80.00th=[ 186], 90.00th=[ 279], 95.00th=[ 334], 00:26:11.120 | 99.00th=[ 701], 99.50th=[ 743], 99.90th=[ 835], 99.95th=[ 835], 00:26:11.120 | 99.99th=[ 869] 00:26:11.120 bw ( KiB/s): min=19456, max=497152, per=15.37%, avg=142131.20, stdev=144420.54, samples=20 00:26:11.120 iops : min= 76, max= 1942, avg=555.20, stdev=564.14, samples=20 00:26:11.120 lat (msec) : 20=0.16%, 50=53.08%, 100=9.42%, 250=22.83%, 500=12.11% 00:26:11.120 lat (msec) : 750=1.98%, 1000=0.43% 00:26:11.120 cpu : usr=0.30%, sys=2.13%, ctx=783, majf=0, minf=4097 00:26:11.120 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:26:11.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:11.120 issued rwts: total=5616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.120 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:11.120 job4: (groupid=0, jobs=1): err= 0: pid=1058610: Mon Dec 16 02:47:39 2024 00:26:11.120 read: IOPS=441, BW=110MiB/s (116MB/s)(1111MiB/10073msec) 00:26:11.120 slat (usec): min=9, max=160316, avg=1796.17, stdev=8296.89 00:26:11.120 clat (msec): min=17, max=908, avg=143.13, stdev=156.02 00:26:11.120 lat (msec): min=18, max=947, avg=144.92, stdev=157.27 00:26:11.120 clat percentiles (msec): 00:26:11.120 | 1.00th=[ 23], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 43], 00:26:11.120 | 30.00th=[ 58], 40.00th=[ 71], 50.00th=[ 85], 60.00th=[ 103], 00:26:11.120 | 70.00th=[ 134], 80.00th=[ 190], 90.00th=[ 359], 95.00th=[ 481], 00:26:11.120 | 99.00th=[ 768], 99.50th=[ 
860], 99.90th=[ 869], 99.95th=[ 869], 00:26:11.120 | 99.99th=[ 911] 00:26:11.120 bw ( KiB/s): min=26112, max=448000, per=12.13%, avg=112128.00, stdev=106906.93, samples=20 00:26:11.120 iops : min= 102, max= 1750, avg=438.00, stdev=417.61, samples=20 00:26:11.120 lat (msec) : 20=0.09%, 50=21.76%, 100=37.02%, 250=24.31%, 500=12.18% 00:26:11.120 lat (msec) : 750=3.40%, 1000=1.24% 00:26:11.120 cpu : usr=0.13%, sys=1.79%, ctx=786, majf=0, minf=4097 00:26:11.120 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:11.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.120 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:11.120 issued rwts: total=4443,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.120 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:11.120 job5: (groupid=0, jobs=1): err= 0: pid=1058615: Mon Dec 16 02:47:39 2024 00:26:11.120 read: IOPS=201, BW=50.4MiB/s (52.8MB/s)(510MiB/10121msec) 00:26:11.120 slat (usec): min=14, max=515706, avg=3919.61, stdev=22066.56 00:26:11.120 clat (msec): min=13, max=1049, avg=313.31, stdev=248.72 00:26:11.120 lat (msec): min=14, max=1297, avg=317.23, stdev=251.00 00:26:11.120 clat percentiles (msec): 00:26:11.120 | 1.00th=[ 69], 5.00th=[ 80], 10.00th=[ 87], 20.00th=[ 106], 00:26:11.120 | 30.00th=[ 124], 40.00th=[ 163], 50.00th=[ 218], 60.00th=[ 317], 00:26:11.121 | 70.00th=[ 393], 80.00th=[ 489], 90.00th=[ 709], 95.00th=[ 885], 00:26:11.121 | 99.00th=[ 995], 99.50th=[ 1011], 99.90th=[ 1053], 99.95th=[ 1053], 00:26:11.121 | 99.99th=[ 1053] 00:26:11.121 bw ( KiB/s): min=10752, max=157184, per=5.47%, avg=50622.20, stdev=40277.01, samples=20 00:26:11.121 iops : min= 42, max= 614, avg=197.70, stdev=157.27, samples=20 00:26:11.121 lat (msec) : 20=0.15%, 50=0.34%, 100=16.47%, 250=35.93%, 500=27.70% 00:26:11.121 lat (msec) : 750=10.59%, 1000=7.84%, 2000=0.98% 00:26:11.121 cpu : usr=0.04%, sys=0.78%, ctx=312, majf=0, minf=4097 
00:26:11.121 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:26:11.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.121 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:11.121 issued rwts: total=2040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.121 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:11.121 job6: (groupid=0, jobs=1): err= 0: pid=1058616: Mon Dec 16 02:47:39 2024 00:26:11.121 read: IOPS=190, BW=47.6MiB/s (49.9MB/s)(482MiB/10127msec) 00:26:11.121 slat (usec): min=14, max=578045, avg=2246.96, stdev=18780.12 00:26:11.121 clat (msec): min=11, max=1132, avg=333.38, stdev=254.70 00:26:11.121 lat (msec): min=11, max=1250, avg=335.63, stdev=256.55 00:26:11.121 clat percentiles (msec): 00:26:11.121 | 1.00th=[ 20], 5.00th=[ 27], 10.00th=[ 36], 20.00th=[ 92], 00:26:11.121 | 30.00th=[ 136], 40.00th=[ 188], 50.00th=[ 317], 60.00th=[ 380], 00:26:11.121 | 70.00th=[ 456], 80.00th=[ 531], 90.00th=[ 693], 95.00th=[ 835], 00:26:11.121 | 99.00th=[ 1062], 99.50th=[ 1070], 99.90th=[ 1116], 99.95th=[ 1133], 00:26:11.121 | 99.99th=[ 1133] 00:26:11.121 bw ( KiB/s): min= 6656, max=133120, per=5.17%, avg=47767.50, stdev=30762.37, samples=20 00:26:11.121 iops : min= 26, max= 520, avg=186.55, stdev=120.20, samples=20 00:26:11.121 lat (msec) : 20=1.14%, 50=13.06%, 100=7.00%, 250=22.45%, 500=33.28% 00:26:11.121 lat (msec) : 750=16.33%, 1000=4.51%, 2000=2.23% 00:26:11.121 cpu : usr=0.01%, sys=0.84%, ctx=437, majf=0, minf=4097 00:26:11.121 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:26:11.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.121 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:11.121 issued rwts: total=1929,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.121 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:11.121 job7: (groupid=0, jobs=1): err= 0: 
pid=1058617: Mon Dec 16 02:47:39 2024 00:26:11.121 read: IOPS=257, BW=64.4MiB/s (67.5MB/s)(649MiB/10075msec) 00:26:11.121 slat (usec): min=16, max=152729, avg=2316.15, stdev=11007.78 00:26:11.121 clat (msec): min=15, max=786, avg=245.87, stdev=141.65 00:26:11.121 lat (msec): min=16, max=786, avg=248.18, stdev=143.41 00:26:11.121 clat percentiles (msec): 00:26:11.121 | 1.00th=[ 28], 5.00th=[ 75], 10.00th=[ 101], 20.00th=[ 142], 00:26:11.121 | 30.00th=[ 165], 40.00th=[ 190], 50.00th=[ 213], 60.00th=[ 247], 00:26:11.121 | 70.00th=[ 271], 80.00th=[ 321], 90.00th=[ 435], 95.00th=[ 558], 00:26:11.121 | 99.00th=[ 709], 99.50th=[ 735], 99.90th=[ 785], 99.95th=[ 785], 00:26:11.121 | 99.99th=[ 785] 00:26:11.121 bw ( KiB/s): min=19968, max=125952, per=7.01%, avg=64793.60, stdev=26750.39, samples=20 00:26:11.121 iops : min= 78, max= 492, avg=253.10, stdev=104.49, samples=20 00:26:11.121 lat (msec) : 20=0.08%, 50=2.77%, 100=7.24%, 250=50.25%, 500=32.76% 00:26:11.121 lat (msec) : 750=6.59%, 1000=0.31% 00:26:11.121 cpu : usr=0.09%, sys=0.87%, ctx=439, majf=0, minf=4097 00:26:11.121 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:11.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.121 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:11.121 issued rwts: total=2595,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.121 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:11.121 job8: (groupid=0, jobs=1): err= 0: pid=1058619: Mon Dec 16 02:47:39 2024 00:26:11.121 read: IOPS=235, BW=58.8MiB/s (61.7MB/s)(592MiB/10073msec) 00:26:11.121 slat (usec): min=14, max=323784, avg=3466.55, stdev=17179.05 00:26:11.121 clat (usec): min=1882, max=1042.6k, avg=268285.55, stdev=199183.69 00:26:11.121 lat (usec): min=1921, max=1185.5k, avg=271752.10, stdev=202101.56 00:26:11.121 clat percentiles (msec): 00:26:11.121 | 1.00th=[ 5], 5.00th=[ 11], 10.00th=[ 23], 20.00th=[ 78], 00:26:11.121 | 30.00th=[ 
138], 40.00th=[ 194], 50.00th=[ 264], 60.00th=[ 305], 00:26:11.121 | 70.00th=[ 330], 80.00th=[ 409], 90.00th=[ 531], 95.00th=[ 634], 00:26:11.121 | 99.00th=[ 1011], 99.50th=[ 1045], 99.90th=[ 1045], 99.95th=[ 1045], 00:26:11.121 | 99.99th=[ 1045] 00:26:11.121 bw ( KiB/s): min=11264, max=212480, per=6.38%, avg=59011.15, stdev=45980.29, samples=20 00:26:11.121 iops : min= 44, max= 830, avg=230.50, stdev=179.62, samples=20 00:26:11.121 lat (msec) : 2=0.08%, 4=0.68%, 10=3.46%, 20=5.28%, 50=3.88% 00:26:11.121 lat (msec) : 100=10.22%, 250=25.03%, 500=38.58%, 750=10.93%, 1000=0.84% 00:26:11.121 lat (msec) : 2000=1.01% 00:26:11.121 cpu : usr=0.02%, sys=1.01%, ctx=408, majf=0, minf=4097 00:26:11.121 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:26:11.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.121 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:11.121 issued rwts: total=2369,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.121 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:11.121 job9: (groupid=0, jobs=1): err= 0: pid=1058620: Mon Dec 16 02:47:39 2024 00:26:11.121 read: IOPS=349, BW=87.4MiB/s (91.7MB/s)(886MiB/10130msec) 00:26:11.121 slat (usec): min=13, max=228015, avg=2447.42, stdev=10582.77 00:26:11.121 clat (msec): min=16, max=967, avg=180.30, stdev=198.89 00:26:11.121 lat (msec): min=16, max=967, avg=182.75, stdev=201.11 00:26:11.121 clat percentiles (msec): 00:26:11.121 | 1.00th=[ 27], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 33], 00:26:11.121 | 30.00th=[ 35], 40.00th=[ 45], 50.00th=[ 87], 60.00th=[ 140], 00:26:11.121 | 70.00th=[ 215], 80.00th=[ 347], 90.00th=[ 456], 95.00th=[ 600], 00:26:11.121 | 99.00th=[ 877], 99.50th=[ 919], 99.90th=[ 944], 99.95th=[ 944], 00:26:11.121 | 99.99th=[ 969] 00:26:11.121 bw ( KiB/s): min=13312, max=504320, per=9.63%, avg=89062.40, stdev=115375.44, samples=20 00:26:11.121 iops : min= 52, max= 1970, avg=347.90, stdev=450.69, 
samples=20 00:26:11.121 lat (msec) : 20=0.08%, 50=40.14%, 100=13.27%, 250=18.32%, 500=19.73% 00:26:11.121 lat (msec) : 750=6.46%, 1000=2.00% 00:26:11.121 cpu : usr=0.22%, sys=1.40%, ctx=559, majf=0, minf=3722 00:26:11.121 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:11.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.121 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:11.121 issued rwts: total=3543,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.121 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:11.121 job10: (groupid=0, jobs=1): err= 0: pid=1058621: Mon Dec 16 02:47:39 2024 00:26:11.121 read: IOPS=373, BW=93.4MiB/s (97.9MB/s)(946MiB/10131msec) 00:26:11.121 slat (usec): min=18, max=367576, avg=2531.02, stdev=13830.77 00:26:11.121 clat (msec): min=2, max=1079, avg=168.56, stdev=198.79 00:26:11.121 lat (msec): min=2, max=1080, avg=171.09, stdev=201.91 00:26:11.121 clat percentiles (msec): 00:26:11.121 | 1.00th=[ 8], 5.00th=[ 16], 10.00th=[ 38], 20.00th=[ 44], 00:26:11.121 | 30.00th=[ 48], 40.00th=[ 71], 50.00th=[ 83], 60.00th=[ 90], 00:26:11.121 | 70.00th=[ 140], 80.00th=[ 292], 90.00th=[ 460], 95.00th=[ 609], 00:26:11.121 | 99.00th=[ 885], 99.50th=[ 1036], 99.90th=[ 1045], 99.95th=[ 1045], 00:26:11.121 | 99.99th=[ 1083] 00:26:11.121 bw ( KiB/s): min=12800, max=348344, per=10.31%, avg=95292.40, stdev=90326.17, samples=20 00:26:11.121 iops : min= 50, max= 1360, avg=372.20, stdev=352.73, samples=20 00:26:11.121 lat (msec) : 4=0.11%, 10=1.93%, 20=5.15%, 50=25.47%, 100=31.02% 00:26:11.121 lat (msec) : 250=14.90%, 500=11.81%, 750=7.74%, 1000=1.37%, 2000=0.50% 00:26:11.121 cpu : usr=0.19%, sys=1.73%, ctx=965, majf=0, minf=4097 00:26:11.121 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:26:11.121 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:11.121 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.1%, >=64=0.0% 00:26:11.121 issued rwts: total=3785,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:11.121 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:11.121 00:26:11.121 Run status group 0 (all jobs): 00:26:11.121 READ: bw=903MiB/s (947MB/s), 36.5MiB/s-153MiB/s (38.3MB/s-161MB/s), io=9149MiB (9593MB), run=10073-10132msec 00:26:11.121 00:26:11.121 Disk stats (read/write): 00:26:11.121 nvme0n1: ios=5068/0, merge=0/0, ticks=1238448/0, in_queue=1238448, util=97.30% 00:26:11.121 nvme10n1: ios=2807/0, merge=0/0, ticks=1184163/0, in_queue=1184163, util=97.47% 00:26:11.121 nvme1n1: ios=12206/0, merge=0/0, ticks=1232196/0, in_queue=1232196, util=97.79% 00:26:11.121 nvme2n1: ios=11067/0, merge=0/0, ticks=1224919/0, in_queue=1224919, util=97.94% 00:26:11.121 nvme3n1: ios=8677/0, merge=0/0, ticks=1244579/0, in_queue=1244579, util=98.03% 00:26:11.121 nvme4n1: ios=3953/0, merge=0/0, ticks=1198266/0, in_queue=1198266, util=98.32% 00:26:11.121 nvme5n1: ios=3708/0, merge=0/0, ticks=1211179/0, in_queue=1211179, util=98.45% 00:26:11.121 nvme6n1: ios=5014/0, merge=0/0, ticks=1232802/0, in_queue=1232802, util=98.61% 00:26:11.121 nvme7n1: ios=4579/0, merge=0/0, ticks=1231642/0, in_queue=1231642, util=98.94% 00:26:11.121 nvme8n1: ios=6945/0, merge=0/0, ticks=1171981/0, in_queue=1171981, util=99.15% 00:26:11.121 nvme9n1: ios=7438/0, merge=0/0, ticks=1195151/0, in_queue=1195151, util=99.26% 00:26:11.121 02:47:39 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:11.121 [global] 00:26:11.121 thread=1 00:26:11.121 invalidate=1 00:26:11.121 rw=randwrite 00:26:11.121 time_based=1 00:26:11.121 runtime=10 00:26:11.121 ioengine=libaio 00:26:11.121 direct=1 00:26:11.121 bs=262144 00:26:11.121 iodepth=64 00:26:11.121 norandommap=1 00:26:11.121 numjobs=1 00:26:11.121 00:26:11.121 [job0] 00:26:11.121 filename=/dev/nvme0n1 
00:26:11.121 [job1] 00:26:11.121 filename=/dev/nvme10n1 00:26:11.121 [job2] 00:26:11.121 filename=/dev/nvme1n1 00:26:11.121 [job3] 00:26:11.121 filename=/dev/nvme2n1 00:26:11.121 [job4] 00:26:11.121 filename=/dev/nvme3n1 00:26:11.121 [job5] 00:26:11.121 filename=/dev/nvme4n1 00:26:11.121 [job6] 00:26:11.121 filename=/dev/nvme5n1 00:26:11.121 [job7] 00:26:11.122 filename=/dev/nvme6n1 00:26:11.122 [job8] 00:26:11.122 filename=/dev/nvme7n1 00:26:11.122 [job9] 00:26:11.122 filename=/dev/nvme8n1 00:26:11.122 [job10] 00:26:11.122 filename=/dev/nvme9n1 00:26:11.122 Could not set queue depth (nvme0n1) 00:26:11.122 Could not set queue depth (nvme10n1) 00:26:11.122 Could not set queue depth (nvme1n1) 00:26:11.122 Could not set queue depth (nvme2n1) 00:26:11.122 Could not set queue depth (nvme3n1) 00:26:11.122 Could not set queue depth (nvme4n1) 00:26:11.122 Could not set queue depth (nvme5n1) 00:26:11.122 Could not set queue depth (nvme6n1) 00:26:11.122 Could not set queue depth (nvme7n1) 00:26:11.122 Could not set queue depth (nvme8n1) 00:26:11.122 Could not set queue depth (nvme9n1) 00:26:11.122 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:11.122 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:11.122 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:11.122 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:11.122 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:11.122 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:11.122 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 
00:26:11.122 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:11.122 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:11.122 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:11.122 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:11.122 fio-3.35 00:26:11.122 Starting 11 threads 00:26:21.099 00:26:21.099 job0: (groupid=0, jobs=1): err= 0: pid=1059643: Mon Dec 16 02:47:51 2024 00:26:21.099 write: IOPS=495, BW=124MiB/s (130MB/s)(1244MiB/10047msec); 0 zone resets 00:26:21.099 slat (usec): min=32, max=102008, avg=1632.36, stdev=4996.39 00:26:21.099 clat (usec): min=882, max=613284, avg=127555.09, stdev=128644.91 00:26:21.099 lat (usec): min=949, max=613345, avg=129187.45, stdev=130163.23 00:26:21.099 clat percentiles (msec): 00:26:21.099 | 1.00th=[ 7], 5.00th=[ 46], 10.00th=[ 53], 20.00th=[ 55], 00:26:21.099 | 30.00th=[ 57], 40.00th=[ 58], 50.00th=[ 59], 60.00th=[ 73], 00:26:21.099 | 70.00th=[ 106], 80.00th=[ 207], 90.00th=[ 359], 95.00th=[ 435], 00:26:21.099 | 99.00th=[ 518], 99.50th=[ 531], 99.90th=[ 550], 99.95th=[ 584], 00:26:21.099 | 99.99th=[ 617] 00:26:21.099 bw ( KiB/s): min=32768, max=289280, per=11.64%, avg=125716.70, stdev=97102.30, samples=20 00:26:21.099 iops : min= 128, max= 1130, avg=491.00, stdev=379.30, samples=20 00:26:21.099 lat (usec) : 1000=0.04% 00:26:21.099 lat (msec) : 2=0.20%, 4=0.12%, 10=1.21%, 20=0.58%, 50=4.24% 00:26:21.099 lat (msec) : 100=62.24%, 250=14.46%, 500=15.24%, 750=1.67% 00:26:21.099 cpu : usr=1.37%, sys=1.49%, ctx=1814, majf=0, minf=1 00:26:21.099 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:26:21.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.099 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:21.099 issued rwts: total=0,4974,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.099 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:21.099 job1: (groupid=0, jobs=1): err= 0: pid=1059656: Mon Dec 16 02:47:51 2024 00:26:21.099 write: IOPS=266, BW=66.5MiB/s (69.8MB/s)(681MiB/10234msec); 0 zone resets 00:26:21.099 slat (usec): min=27, max=163572, avg=2708.22, stdev=8664.38 00:26:21.099 clat (usec): min=1211, max=683913, avg=237539.42, stdev=171316.88 00:26:21.099 lat (usec): min=1905, max=683953, avg=240247.64, stdev=173475.09 00:26:21.099 clat percentiles (msec): 00:26:21.099 | 1.00th=[ 11], 5.00th=[ 22], 10.00th=[ 39], 20.00th=[ 71], 00:26:21.099 | 30.00th=[ 82], 40.00th=[ 133], 50.00th=[ 192], 60.00th=[ 321], 00:26:21.099 | 70.00th=[ 372], 80.00th=[ 430], 90.00th=[ 464], 95.00th=[ 506], 00:26:21.099 | 99.00th=[ 584], 99.50th=[ 609], 99.90th=[ 651], 99.95th=[ 684], 00:26:21.099 | 99.99th=[ 684] 00:26:21.099 bw ( KiB/s): min=30720, max=141824, per=6.30%, avg=68099.55, stdev=33472.30, samples=20 00:26:21.100 iops : min= 120, max= 554, avg=266.00, stdev=130.73, samples=20 00:26:21.100 lat (msec) : 2=0.07%, 4=0.33%, 10=0.40%, 20=3.49%, 50=7.82% 00:26:21.100 lat (msec) : 100=23.05%, 250=21.40%, 500=37.78%, 750=5.65% 00:26:21.100 cpu : usr=0.61%, sys=0.87%, ctx=1484, majf=0, minf=1 00:26:21.100 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:21.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:21.100 issued rwts: total=0,2724,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.100 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:21.100 job2: (groupid=0, jobs=1): err= 0: pid=1059660: Mon Dec 16 02:47:51 2024 00:26:21.100 write: IOPS=352, BW=88.2MiB/s (92.5MB/s)(902MiB/10231msec); 0 zone resets 00:26:21.100 slat (usec): min=23, max=163922, avg=1844.56, 
stdev=7862.15 00:26:21.100 clat (usec): min=699, max=727031, avg=179490.30, stdev=175530.13 00:26:21.100 lat (usec): min=749, max=727077, avg=181334.86, stdev=177509.70 00:26:21.100 clat percentiles (msec): 00:26:21.100 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 24], 20.00th=[ 42], 00:26:21.100 | 30.00th=[ 53], 40.00th=[ 78], 50.00th=[ 116], 60.00th=[ 163], 00:26:21.100 | 70.00th=[ 190], 80.00th=[ 351], 90.00th=[ 489], 95.00th=[ 542], 00:26:21.100 | 99.00th=[ 634], 99.50th=[ 642], 99.90th=[ 701], 99.95th=[ 726], 00:26:21.100 | 99.99th=[ 726] 00:26:21.100 bw ( KiB/s): min=28672, max=311696, per=8.40%, avg=90746.40, stdev=79572.50, samples=20 00:26:21.100 iops : min= 112, max= 1217, avg=354.45, stdev=310.75, samples=20 00:26:21.100 lat (usec) : 750=0.03%, 1000=0.11% 00:26:21.100 lat (msec) : 2=0.78%, 4=1.80%, 10=3.71%, 20=2.60%, 50=18.59% 00:26:21.100 lat (msec) : 100=20.03%, 250=29.12%, 500=14.96%, 750=8.26% 00:26:21.100 cpu : usr=0.70%, sys=1.24%, ctx=2366, majf=0, minf=1 00:26:21.100 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:26:21.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:21.100 issued rwts: total=0,3609,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.100 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:21.100 job3: (groupid=0, jobs=1): err= 0: pid=1059661: Mon Dec 16 02:47:51 2024 00:26:21.100 write: IOPS=284, BW=71.1MiB/s (74.5MB/s)(727MiB/10234msec); 0 zone resets 00:26:21.100 slat (usec): min=19, max=142851, avg=3028.57, stdev=8995.45 00:26:21.100 clat (msec): min=2, max=797, avg=221.99, stdev=170.75 00:26:21.100 lat (msec): min=2, max=797, avg=225.01, stdev=173.06 00:26:21.100 clat percentiles (msec): 00:26:21.100 | 1.00th=[ 10], 5.00th=[ 40], 10.00th=[ 42], 20.00th=[ 72], 00:26:21.100 | 30.00th=[ 88], 40.00th=[ 132], 50.00th=[ 171], 60.00th=[ 192], 00:26:21.100 | 70.00th=[ 326], 80.00th=[ 401], 
90.00th=[ 498], 95.00th=[ 542], 00:26:21.100 | 99.00th=[ 575], 99.50th=[ 584], 99.90th=[ 768], 99.95th=[ 802], 00:26:21.100 | 99.99th=[ 802] 00:26:21.100 bw ( KiB/s): min=26624, max=279552, per=6.74%, avg=72836.40, stdev=61186.43, samples=20 00:26:21.100 iops : min= 104, max= 1092, avg=284.45, stdev=239.00, samples=20 00:26:21.100 lat (msec) : 4=0.17%, 10=1.10%, 20=0.93%, 50=12.86%, 100=19.18% 00:26:21.100 lat (msec) : 250=29.91%, 500=26.33%, 750=9.32%, 1000=0.21% 00:26:21.100 cpu : usr=0.61%, sys=0.90%, ctx=1024, majf=0, minf=1 00:26:21.100 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:21.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:21.100 issued rwts: total=0,2909,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.100 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:21.100 job4: (groupid=0, jobs=1): err= 0: pid=1059662: Mon Dec 16 02:47:51 2024 00:26:21.100 write: IOPS=187, BW=46.9MiB/s (49.2MB/s)(480MiB/10235msec); 0 zone resets 00:26:21.100 slat (usec): min=24, max=108333, avg=4640.28, stdev=10411.12 00:26:21.100 clat (usec): min=746, max=687498, avg=336228.26, stdev=151922.69 00:26:21.100 lat (usec): min=780, max=687552, avg=340868.54, stdev=154020.80 00:26:21.100 clat percentiles (msec): 00:26:21.100 | 1.00th=[ 36], 5.00th=[ 89], 10.00th=[ 106], 20.00th=[ 144], 00:26:21.100 | 30.00th=[ 241], 40.00th=[ 342], 50.00th=[ 388], 60.00th=[ 426], 00:26:21.100 | 70.00th=[ 443], 80.00th=[ 468], 90.00th=[ 493], 95.00th=[ 518], 00:26:21.100 | 99.00th=[ 575], 99.50th=[ 625], 99.90th=[ 684], 99.95th=[ 684], 00:26:21.100 | 99.99th=[ 684] 00:26:21.100 bw ( KiB/s): min=28672, max=143360, per=4.40%, avg=47511.00, stdev=27161.54, samples=20 00:26:21.100 iops : min= 112, max= 560, avg=185.55, stdev=106.09, samples=20 00:26:21.100 lat (usec) : 750=0.05%, 1000=0.26% 00:26:21.100 lat (msec) : 20=0.26%, 50=1.46%, 
100=6.51%, 250=21.77%, 500=61.09% 00:26:21.100 lat (msec) : 750=8.59% 00:26:21.100 cpu : usr=0.54%, sys=0.62%, ctx=714, majf=0, minf=1 00:26:21.100 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:26:21.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.100 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:21.100 issued rwts: total=0,1920,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.100 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:21.100 job5: (groupid=0, jobs=1): err= 0: pid=1059663: Mon Dec 16 02:47:51 2024 00:26:21.100 write: IOPS=528, BW=132MiB/s (139MB/s)(1333MiB/10086msec); 0 zone resets 00:26:21.100 slat (usec): min=28, max=52559, avg=1793.76, stdev=3753.38 00:26:21.100 clat (usec): min=1063, max=259599, avg=119044.63, stdev=49398.42 00:26:21.100 lat (usec): min=1139, max=259648, avg=120838.40, stdev=50051.61 00:26:21.100 clat percentiles (msec): 00:26:21.100 | 1.00th=[ 17], 5.00th=[ 53], 10.00th=[ 70], 20.00th=[ 86], 00:26:21.100 | 30.00th=[ 91], 40.00th=[ 94], 50.00th=[ 104], 60.00th=[ 117], 00:26:21.100 | 70.00th=[ 140], 80.00th=[ 169], 90.00th=[ 186], 95.00th=[ 215], 00:26:21.100 | 99.00th=[ 245], 99.50th=[ 247], 99.90th=[ 257], 99.95th=[ 259], 00:26:21.100 | 99.99th=[ 259] 00:26:21.100 bw ( KiB/s): min=69632, max=266240, per=12.48%, avg=134858.00, stdev=45194.97, samples=20 00:26:21.100 iops : min= 272, max= 1040, avg=526.70, stdev=176.54, samples=20 00:26:21.100 lat (msec) : 2=0.09%, 4=0.17%, 10=0.34%, 20=0.62%, 50=3.30% 00:26:21.100 lat (msec) : 100=42.64%, 250=52.62%, 500=0.23% 00:26:21.100 cpu : usr=1.36%, sys=1.71%, ctx=1601, majf=0, minf=1 00:26:21.100 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:26:21.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:21.100 issued rwts: total=0,5331,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:26:21.100 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:21.100 job6: (groupid=0, jobs=1): err= 0: pid=1059664: Mon Dec 16 02:47:51 2024 00:26:21.100 write: IOPS=189, BW=47.3MiB/s (49.6MB/s)(484MiB/10236msec); 0 zone resets 00:26:21.100 slat (usec): min=27, max=99105, avg=5060.13, stdev=11051.70 00:26:21.100 clat (msec): min=9, max=696, avg=332.95, stdev=169.14 00:26:21.100 lat (msec): min=9, max=696, avg=338.01, stdev=171.47 00:26:21.100 clat percentiles (msec): 00:26:21.100 | 1.00th=[ 48], 5.00th=[ 81], 10.00th=[ 91], 20.00th=[ 122], 00:26:21.100 | 30.00th=[ 203], 40.00th=[ 326], 50.00th=[ 372], 60.00th=[ 430], 00:26:21.100 | 70.00th=[ 468], 80.00th=[ 498], 90.00th=[ 527], 95.00th=[ 558], 00:26:21.100 | 99.00th=[ 600], 99.50th=[ 642], 99.90th=[ 701], 99.95th=[ 701], 00:26:21.100 | 99.99th=[ 701] 00:26:21.100 bw ( KiB/s): min=24576, max=144606, per=4.44%, avg=47972.65, stdev=31600.03, samples=20 00:26:21.100 iops : min= 96, max= 564, avg=187.30, stdev=123.17, samples=20 00:26:21.100 lat (msec) : 10=0.05%, 20=0.67%, 50=0.41%, 100=11.72%, 250=22.77% 00:26:21.100 lat (msec) : 500=47.08%, 750=17.29% 00:26:21.100 cpu : usr=0.50%, sys=0.64%, ctx=531, majf=0, minf=1 00:26:21.100 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:26:21.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.100 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:21.100 issued rwts: total=0,1937,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.100 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:21.100 job7: (groupid=0, jobs=1): err= 0: pid=1059665: Mon Dec 16 02:47:51 2024 00:26:21.100 write: IOPS=667, BW=167MiB/s (175MB/s)(1709MiB/10234msec); 0 zone resets 00:26:21.100 slat (usec): min=28, max=155059, avg=1235.87, stdev=4991.61 00:26:21.100 clat (usec): min=857, max=805002, avg=94538.68, stdev=110832.89 00:26:21.100 lat (usec): min=1163, 
max=805087, avg=95774.55, stdev=112189.64 00:26:21.100 clat percentiles (msec): 00:26:21.100 | 1.00th=[ 22], 5.00th=[ 37], 10.00th=[ 40], 20.00th=[ 41], 00:26:21.100 | 30.00th=[ 43], 40.00th=[ 43], 50.00th=[ 44], 60.00th=[ 50], 00:26:21.100 | 70.00th=[ 93], 80.00th=[ 118], 90.00th=[ 188], 95.00th=[ 388], 00:26:21.100 | 99.00th=[ 567], 99.50th=[ 592], 99.90th=[ 743], 99.95th=[ 776], 00:26:21.100 | 99.99th=[ 802] 00:26:21.100 bw ( KiB/s): min=26624, max=390144, per=16.04%, avg=173337.90, stdev=130394.39, samples=20 00:26:21.100 iops : min= 104, max= 1524, avg=677.05, stdev=509.41, samples=20 00:26:21.100 lat (usec) : 1000=0.01% 00:26:21.100 lat (msec) : 2=0.07%, 4=0.29%, 10=0.16%, 20=0.29%, 50=60.73% 00:26:21.100 lat (msec) : 100=9.55%, 250=21.16%, 500=5.90%, 750=1.74%, 1000=0.09% 00:26:21.100 cpu : usr=1.47%, sys=1.72%, ctx=2497, majf=0, minf=1 00:26:21.100 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:26:21.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:21.100 issued rwts: total=0,6835,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.100 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:21.100 job8: (groupid=0, jobs=1): err= 0: pid=1059666: Mon Dec 16 02:47:51 2024 00:26:21.100 write: IOPS=640, BW=160MiB/s (168MB/s)(1616MiB/10086msec); 0 zone resets 00:26:21.100 slat (usec): min=18, max=165361, avg=1346.16, stdev=4201.14 00:26:21.100 clat (usec): min=959, max=469449, avg=98485.87, stdev=75572.52 00:26:21.100 lat (usec): min=1019, max=487223, avg=99832.03, stdev=76515.05 00:26:21.100 clat percentiles (msec): 00:26:21.100 | 1.00th=[ 5], 5.00th=[ 26], 10.00th=[ 35], 20.00th=[ 38], 00:26:21.100 | 30.00th=[ 41], 40.00th=[ 68], 50.00th=[ 90], 60.00th=[ 94], 00:26:21.100 | 70.00th=[ 113], 80.00th=[ 155], 90.00th=[ 184], 95.00th=[ 245], 00:26:21.100 | 99.00th=[ 401], 99.50th=[ 430], 99.90th=[ 468], 99.95th=[ 468], 
00:26:21.100 | 99.99th=[ 468] 00:26:21.100 bw ( KiB/s): min=40878, max=424960, per=15.16%, avg=163819.85, stdev=104237.68, samples=20 00:26:21.100 iops : min= 159, max= 1660, avg=639.85, stdev=407.25, samples=20 00:26:21.100 lat (usec) : 1000=0.02% 00:26:21.100 lat (msec) : 2=0.15%, 4=0.73%, 10=1.62%, 20=1.76%, 50=33.43% 00:26:21.100 lat (msec) : 100=27.07%, 250=30.35%, 500=4.87% 00:26:21.101 cpu : usr=1.44%, sys=1.56%, ctx=2369, majf=0, minf=1 00:26:21.101 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:26:21.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:21.101 issued rwts: total=0,6462,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.101 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:21.101 job9: (groupid=0, jobs=1): err= 0: pid=1059667: Mon Dec 16 02:47:51 2024 00:26:21.101 write: IOPS=441, BW=110MiB/s (116MB/s)(1126MiB/10205msec); 0 zone resets 00:26:21.101 slat (usec): min=25, max=91475, avg=1325.96, stdev=5018.52 00:26:21.101 clat (usec): min=759, max=517913, avg=143576.26, stdev=131704.84 00:26:21.101 lat (usec): min=799, max=517970, avg=144902.22, stdev=133200.26 00:26:21.101 clat percentiles (msec): 00:26:21.101 | 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 12], 20.00th=[ 33], 00:26:21.101 | 30.00th=[ 56], 40.00th=[ 90], 50.00th=[ 109], 60.00th=[ 120], 00:26:21.101 | 70.00th=[ 167], 80.00th=[ 228], 90.00th=[ 376], 95.00th=[ 456], 00:26:21.101 | 99.00th=[ 489], 99.50th=[ 510], 99.90th=[ 514], 99.95th=[ 518], 00:26:21.101 | 99.99th=[ 518] 00:26:21.101 bw ( KiB/s): min=32768, max=202240, per=10.52%, avg=113716.85, stdev=56997.56, samples=20 00:26:21.101 iops : min= 128, max= 790, avg=444.15, stdev=222.71, samples=20 00:26:21.101 lat (usec) : 1000=0.07% 00:26:21.101 lat (msec) : 2=0.49%, 4=2.80%, 10=5.77%, 20=5.02%, 50=13.14% 00:26:21.101 lat (msec) : 100=16.74%, 250=38.89%, 500=16.47%, 750=0.62% 
00:26:21.101 cpu : usr=0.98%, sys=1.41%, ctx=3184, majf=0, minf=1 00:26:21.101 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:21.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.101 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:21.101 issued rwts: total=0,4505,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.101 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:21.101 job10: (groupid=0, jobs=1): err= 0: pid=1059668: Mon Dec 16 02:47:51 2024 00:26:21.101 write: IOPS=195, BW=48.8MiB/s (51.1MB/s)(499MiB/10235msec); 0 zone resets 00:26:21.101 slat (usec): min=27, max=116155, avg=4663.26, stdev=10643.48 00:26:21.101 clat (usec): min=1054, max=707719, avg=323318.45, stdev=167441.83 00:26:21.101 lat (usec): min=1136, max=707772, avg=327981.70, stdev=169918.62 00:26:21.101 clat percentiles (msec): 00:26:21.101 | 1.00th=[ 4], 5.00th=[ 14], 10.00th=[ 32], 20.00th=[ 138], 00:26:21.101 | 30.00th=[ 245], 40.00th=[ 338], 50.00th=[ 376], 60.00th=[ 422], 00:26:21.101 | 70.00th=[ 443], 80.00th=[ 464], 90.00th=[ 498], 95.00th=[ 518], 00:26:21.101 | 99.00th=[ 567], 99.50th=[ 651], 99.90th=[ 709], 99.95th=[ 709], 00:26:21.101 | 99.99th=[ 709] 00:26:21.101 bw ( KiB/s): min=28672, max=226816, per=4.58%, avg=49461.00, stdev=42451.99, samples=20 00:26:21.101 iops : min= 112, max= 886, avg=193.15, stdev=165.83, samples=20 00:26:21.101 lat (msec) : 2=0.30%, 4=0.75%, 10=1.50%, 20=3.26%, 50=6.71% 00:26:21.101 lat (msec) : 100=4.26%, 250=13.58%, 500=60.87%, 750=8.77% 00:26:21.101 cpu : usr=0.53%, sys=0.64%, ctx=851, majf=0, minf=1 00:26:21.101 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:26:21.101 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.101 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:21.101 issued rwts: total=0,1996,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.101 latency : 
target=0, window=0, percentile=100.00%, depth=64 00:26:21.101 00:26:21.101 Run status group 0 (all jobs): 00:26:21.101 WRITE: bw=1055MiB/s (1106MB/s), 46.9MiB/s-167MiB/s (49.2MB/s-175MB/s), io=10.5GiB (11.3GB), run=10047-10236msec 00:26:21.101 00:26:21.101 Disk stats (read/write): 00:26:21.101 nvme0n1: ios=45/9708, merge=0/0, ticks=1109/1213592, in_queue=1214701, util=99.85% 00:26:21.101 nvme10n1: ios=39/5398, merge=0/0, ticks=2015/1221549, in_queue=1223564, util=100.00% 00:26:21.101 nvme1n1: ios=19/7173, merge=0/0, ticks=183/1242368, in_queue=1242551, util=97.87% 00:26:21.101 nvme2n1: ios=51/5770, merge=0/0, ticks=2856/1215186, in_queue=1218042, util=100.00% 00:26:21.101 nvme3n1: ios=48/3790, merge=0/0, ticks=854/1230142, in_queue=1230996, util=100.00% 00:26:21.101 nvme4n1: ios=51/10378, merge=0/0, ticks=1224/1203031, in_queue=1204255, util=100.00% 00:26:21.101 nvme5n1: ios=0/3821, merge=0/0, ticks=0/1224086, in_queue=1224086, util=98.29% 00:26:21.101 nvme6n1: ios=13/13621, merge=0/0, ticks=339/1233827, in_queue=1234166, util=98.65% 00:26:21.101 nvme7n1: ios=44/12714, merge=0/0, ticks=2512/1197859, in_queue=1200371, util=100.00% 00:26:21.101 nvme8n1: ios=0/8984, merge=0/0, ticks=0/1248411, in_queue=1248411, util=98.95% 00:26:21.101 nvme9n1: ios=0/3942, merge=0/0, ticks=0/1229459, in_queue=1229459, util=99.07% 00:26:21.101 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:21.101 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:21.101 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:21.101 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:21.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:21.101 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:21.101 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:21.101 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:21.101 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:21.101 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:21.101 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:21.101 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:21.101 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:21.101 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.101 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:21.101 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.101 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:21.101 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:21.101 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:21.101 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:21.101 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:21.101 02:47:51 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:21.101 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:21.101 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:21.101 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:21.101 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:21.101 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:21.101 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.101 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:21.101 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.101 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:21.101 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:21.360 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:21.360 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:21.360 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:21.360 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:21.360 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep 
-q -w SPDK3 00:26:21.360 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:21.360 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:21.360 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:21.360 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:21.360 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.360 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:21.360 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.360 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:21.360 02:47:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:21.619 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:21.619 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:21.619 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:21.619 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:21.619 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:21.619 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:21.619 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:21.619 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:21.619 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:21.619 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.619 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:21.619 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.619 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:21.619 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:21.878 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:21.878 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:21.878 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:21.878 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:21.878 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:21.878 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:21.878 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:21.878 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:21.878 02:47:52 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:21.878 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.878 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:21.878 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.878 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:21.878 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:22.136 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:22.136 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:22.136 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:22.136 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:22.136 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:22.136 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:22.136 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:22.136 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:22.136 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:22.136 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.136 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:22.136 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.136 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.137 02:47:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:22.395 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:22.395 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:22.395 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:22.395 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:22.395 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:22.395 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:22.395 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:26:22.654 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:22.654 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:22.654 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.654 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:22.654 02:47:53 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.654 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.654 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:22.654 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:22.654 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:22.654 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:22.654 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:22.654 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:22.654 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:22.654 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:22.654 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:22.654 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:22.654 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.654 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:22.654 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.654 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:26:22.654 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:22.913 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:22.913 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:22.913 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:22.913 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:22.913 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:22.913 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:26:22.914 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:22.914 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:22.914 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:22.914 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.914 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:22.914 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.914 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:22.914 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:22.914 NQN:nqn.2016-06.io.spdk:cnode10 
disconnected 1 controller(s) 00:26:22.914 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:22.914 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:22.914 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:22.914 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:23.173 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:23.173 02:47:53 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:23.173 02:47:53 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:23.173 rmmod nvme_tcp 00:26:23.173 rmmod nvme_fabrics 00:26:23.173 rmmod nvme_keyring 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 1052072 ']' 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 1052072 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 1052072 ']' 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 1052072 00:26:23.173 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:23.432 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:23.432 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1052072 00:26:23.432 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:23.432 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:23.432 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 1052072' 00:26:23.432 killing process with pid 1052072 00:26:23.432 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 1052072 00:26:23.432 02:47:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 1052072 00:26:23.690 02:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:23.690 02:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:23.690 02:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:23.690 02:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:23.691 02:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:26:23.691 02:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:23.691 02:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:26:23.691 02:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:23.691 02:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:23.691 02:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.691 02:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:23.691 02:47:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.226 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:26.226 00:26:26.226 real 1m10.815s 00:26:26.226 user 4m15.686s 00:26:26.226 sys 0m17.389s 00:26:26.226 
02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:26.226 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:26.226 ************************************ 00:26:26.226 END TEST nvmf_multiconnection 00:26:26.226 ************************************ 00:26:26.226 02:47:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:26.226 02:47:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:26.226 02:47:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:26.226 02:47:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:26.226 ************************************ 00:26:26.226 START TEST nvmf_initiator_timeout 00:26:26.226 ************************************ 00:26:26.226 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:26.226 * Looking for test storage... 
00:26:26.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:26.226 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:26.226 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:26:26.226 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 
00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:26.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.227 --rc genhtml_branch_coverage=1 00:26:26.227 --rc genhtml_function_coverage=1 00:26:26.227 --rc genhtml_legend=1 00:26:26.227 --rc geninfo_all_blocks=1 00:26:26.227 --rc geninfo_unexecuted_blocks=1 00:26:26.227 00:26:26.227 ' 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:26.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.227 --rc genhtml_branch_coverage=1 00:26:26.227 --rc genhtml_function_coverage=1 00:26:26.227 --rc genhtml_legend=1 00:26:26.227 --rc geninfo_all_blocks=1 00:26:26.227 --rc geninfo_unexecuted_blocks=1 00:26:26.227 00:26:26.227 ' 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:26.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.227 --rc genhtml_branch_coverage=1 00:26:26.227 --rc genhtml_function_coverage=1 00:26:26.227 --rc genhtml_legend=1 00:26:26.227 --rc geninfo_all_blocks=1 00:26:26.227 --rc geninfo_unexecuted_blocks=1 00:26:26.227 00:26:26.227 ' 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:26.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:26.227 --rc genhtml_branch_coverage=1 00:26:26.227 --rc genhtml_function_coverage=1 00:26:26.227 --rc genhtml_legend=1 00:26:26.227 --rc geninfo_all_blocks=1 00:26:26.227 --rc geninfo_unexecuted_blocks=1 00:26:26.227 00:26:26.227 ' 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:26.227 
02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:26.227 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:26.227 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:26.228 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:26.228 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:26.228 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:26.228 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.228 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:26.228 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.228 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:26.228 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:26.228 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:26.228 02:47:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:32.797 02:48:02 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:32.797 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:32.797 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:32.797 02:48:02 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:32.797 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:32.798 Found net devices under 0000:af:00.0: cvl_0_0 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:32.798 02:48:02 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:32.798 Found net devices under 0000:af:00.1: cvl_0_1 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:32.798 02:48:02 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:32.798 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:32.798 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:26:32.798 00:26:32.798 --- 10.0.0.2 ping statistics --- 00:26:32.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.798 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:32.798 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:32.798 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:26:32.798 00:26:32.798 --- 10.0.0.1 ping statistics --- 00:26:32.798 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:32.798 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=1064898 
00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 1064898 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 1064898 ']' 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:32.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.798 [2024-12-16 02:48:02.608280] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:26:32.798 [2024-12-16 02:48:02.608324] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:32.798 [2024-12-16 02:48:02.686539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:32.798 [2024-12-16 02:48:02.709388] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:32.798 [2024-12-16 02:48:02.709425] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:32.798 [2024-12-16 02:48:02.709433] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:32.798 [2024-12-16 02:48:02.709439] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:32.798 [2024-12-16 02:48:02.709444] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:32.798 [2024-12-16 02:48:02.710738] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.798 [2024-12-16 02:48:02.710862] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:26:32.798 [2024-12-16 02:48:02.710958] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:32.798 [2024-12-16 02:48:02.710958] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:32.798 
02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.798 Malloc0 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.798 Delay0 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:32.798 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.799 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.799 [2024-12-16 02:48:02.889696] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:32.799 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.799 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:32.799 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.799 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.799 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.799 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:32.799 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.799 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.799 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.799 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:32.799 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.799 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:32.799 [2024-12-16 02:48:02.922946] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:32.799 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.799 02:48:02 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:33.367 02:48:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:33.367 
02:48:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:26:33.367 02:48:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:33.367 02:48:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:33.367 02:48:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:26:35.899 02:48:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:35.899 02:48:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:35.899 02:48:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:35.899 02:48:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:35.899 02:48:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:35.899 02:48:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:26:35.899 02:48:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1065581 00:26:35.899 02:48:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:35.899 02:48:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:35.899 [global] 00:26:35.899 thread=1 00:26:35.899 invalidate=1 00:26:35.899 rw=write 00:26:35.899 time_based=1 00:26:35.899 runtime=60 00:26:35.899 ioengine=libaio 00:26:35.899 direct=1 00:26:35.899 bs=4096 00:26:35.899 
iodepth=1 00:26:35.899 norandommap=0 00:26:35.899 numjobs=1 00:26:35.899 00:26:35.899 verify_dump=1 00:26:35.899 verify_backlog=512 00:26:35.899 verify_state_save=0 00:26:35.899 do_verify=1 00:26:35.899 verify=crc32c-intel 00:26:35.899 [job0] 00:26:35.899 filename=/dev/nvme0n1 00:26:35.899 Could not set queue depth (nvme0n1) 00:26:35.899 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:35.899 fio-3.35 00:26:35.899 Starting 1 thread 00:26:38.431 02:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:38.431 02:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.431 02:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:38.431 true 00:26:38.431 02:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.431 02:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:38.431 02:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.431 02:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:38.431 true 00:26:38.431 02:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.431 02:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:38.431 02:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.431 02:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:26:38.431 true 00:26:38.432 02:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.432 02:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:38.432 02:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.432 02:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:38.690 true 00:26:38.690 02:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.690 02:48:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:41.977 02:48:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:41.977 02:48:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.977 02:48:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:41.977 true 00:26:41.977 02:48:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.977 02:48:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:41.977 02:48:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.977 02:48:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:41.977 true 00:26:41.977 02:48:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.977 02:48:12 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:41.977 02:48:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.977 02:48:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:41.977 true 00:26:41.977 02:48:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.977 02:48:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:41.977 02:48:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.977 02:48:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:41.977 true 00:26:41.977 02:48:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.977 02:48:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:41.977 02:48:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1065581 00:27:38.213 00:27:38.213 job0: (groupid=0, jobs=1): err= 0: pid=1065707: Mon Dec 16 02:49:06 2024 00:27:38.213 read: IOPS=15, BW=62.8KiB/s (64.3kB/s)(3768KiB/60029msec) 00:27:38.213 slat (usec): min=6, max=6739, avg=22.22, stdev=219.25 00:27:38.213 clat (usec): min=195, max=41515k, avg=63449.29, stdev=1352156.12 00:27:38.213 lat (usec): min=202, max=41515k, avg=63471.51, stdev=1352156.39 00:27:38.213 clat percentiles (usec): 00:27:38.213 | 1.00th=[ 206], 5.00th=[ 217], 10.00th=[ 223], 00:27:38.213 | 20.00th=[ 229], 30.00th=[ 235], 40.00th=[ 243], 00:27:38.213 | 50.00th=[ 293], 60.00th=[ 41157], 70.00th=[ 41157], 00:27:38.213 | 80.00th=[ 41157], 90.00th=[ 41157], 
95.00th=[ 41157], 00:27:38.213 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[17112761], 00:27:38.213 | 99.95th=[17112761], 99.99th=[17112761] 00:27:38.213 write: IOPS=17, BW=68.2KiB/s (69.9kB/s)(4096KiB/60029msec); 0 zone resets 00:27:38.213 slat (usec): min=10, max=27778, avg=38.79, stdev=867.71 00:27:38.213 clat (usec): min=153, max=394, avg=185.26, stdev=16.85 00:27:38.213 lat (usec): min=165, max=28024, avg=224.04, stdev=869.81 00:27:38.213 clat percentiles (usec): 00:27:38.213 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 172], 00:27:38.213 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:27:38.213 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 204], 95.00th=[ 210], 00:27:38.213 | 99.00th=[ 235], 99.50th=[ 255], 99.90th=[ 306], 99.95th=[ 396], 00:27:38.213 | 99.99th=[ 396] 00:27:38.213 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=2 00:27:38.213 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:27:38.213 lat (usec) : 250=72.74%, 500=4.63%, 750=0.10% 00:27:38.213 lat (msec) : 50=22.48%, >=2000=0.05% 00:27:38.213 cpu : usr=0.03%, sys=0.06%, ctx=1970, majf=0, minf=1 00:27:38.213 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:38.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:38.213 issued rwts: total=942,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:38.213 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:38.213 00:27:38.213 Run status group 0 (all jobs): 00:27:38.213 READ: bw=62.8KiB/s (64.3kB/s), 62.8KiB/s-62.8KiB/s (64.3kB/s-64.3kB/s), io=3768KiB (3858kB), run=60029-60029msec 00:27:38.213 WRITE: bw=68.2KiB/s (69.9kB/s), 68.2KiB/s-68.2KiB/s (69.9kB/s-69.9kB/s), io=4096KiB (4194kB), run=60029-60029msec 00:27:38.213 00:27:38.213 Disk stats (read/write): 00:27:38.213 nvme0n1: ios=991/1024, merge=0/0, ticks=19118/189, 
in_queue=19307, util=99.69% 00:27:38.213 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:38.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:38.213 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:38.213 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:27:38.213 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:38.213 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:38.213 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:38.213 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:38.213 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:27:38.213 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:38.213 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:38.213 nvmf hotplug test: fio successful as expected 00:27:38.213 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:38.213 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:38.213 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:38.213 02:49:06 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:38.213 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:38.213 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:38.213 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:38.213 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:38.213 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:38.213 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:38.213 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:38.213 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:38.214 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:38.214 rmmod nvme_tcp 00:27:38.214 rmmod nvme_fabrics 00:27:38.214 rmmod nvme_keyring 00:27:38.214 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:38.214 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:38.214 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:38.214 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 1064898 ']' 00:27:38.214 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 1064898 00:27:38.214 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 1064898 ']' 00:27:38.214 
02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 1064898 00:27:38.214 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:27:38.214 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:38.214 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1064898 00:27:38.214 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:38.214 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:38.214 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1064898' 00:27:38.214 killing process with pid 1064898 00:27:38.214 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 1064898 00:27:38.214 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 1064898 00:27:38.214 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:38.214 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:38.214 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:38.214 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:38.214 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:27:38.214 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:38.214 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # 
iptables-restore 00:27:38.214 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:38.214 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:38.214 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:38.214 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:38.214 02:49:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:38.473 02:49:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:38.473 00:27:38.473 real 1m12.626s 00:27:38.473 user 4m22.477s 00:27:38.473 sys 0m6.387s 00:27:38.473 02:49:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:38.473 02:49:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:38.473 ************************************ 00:27:38.473 END TEST nvmf_initiator_timeout 00:27:38.473 ************************************ 00:27:38.473 02:49:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:38.473 02:49:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:38.473 02:49:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:38.473 02:49:09 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:38.473 02:49:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:45.039 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:45.039 02:49:14 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@315 -- # pci_devs=() 00:27:45.039 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:45.039 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:45.039 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:45.039 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:45.039 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:45.039 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:27:45.039 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:45.039 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:27:45.039 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:27:45.039 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:27:45.039 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:27:45.039 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:27:45.039 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:27:45.039 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:45.039 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:45.039 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:45.039 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:45.039 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:45.039 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:45.039 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 
-- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:45.039 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:45.039 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:45.039 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:45.039 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:45.039 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:45.039 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:45.039 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:45.040 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:45.040 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:45.040 Found net devices under 0000:af:00.0: cvl_0_0 00:27:45.040 02:49:14 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:45.040 Found net devices under 0000:af:00.1: cvl_0_1 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:45.040 ************************************ 00:27:45.040 START 
TEST nvmf_perf_adq 00:27:45.040 ************************************ 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:45.040 * Looking for test storage... 00:27:45.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:27:45.040 02:49:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:45.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.040 --rc genhtml_branch_coverage=1 00:27:45.040 --rc genhtml_function_coverage=1 00:27:45.040 --rc genhtml_legend=1 00:27:45.040 --rc geninfo_all_blocks=1 00:27:45.040 --rc geninfo_unexecuted_blocks=1 00:27:45.040 00:27:45.040 ' 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:45.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.040 --rc genhtml_branch_coverage=1 00:27:45.040 --rc genhtml_function_coverage=1 00:27:45.040 --rc genhtml_legend=1 00:27:45.040 --rc geninfo_all_blocks=1 00:27:45.040 --rc geninfo_unexecuted_blocks=1 00:27:45.040 00:27:45.040 ' 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:45.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.040 --rc genhtml_branch_coverage=1 00:27:45.040 --rc genhtml_function_coverage=1 00:27:45.040 --rc genhtml_legend=1 00:27:45.040 --rc geninfo_all_blocks=1 00:27:45.040 --rc geninfo_unexecuted_blocks=1 00:27:45.040 00:27:45.040 ' 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:45.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.040 --rc genhtml_branch_coverage=1 00:27:45.040 --rc genhtml_function_coverage=1 00:27:45.040 --rc genhtml_legend=1 00:27:45.040 --rc geninfo_all_blocks=1 00:27:45.040 --rc geninfo_unexecuted_blocks=1 00:27:45.040 00:27:45.040 ' 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:45.040 
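The `lt 1.15 2` trace above steps through the dotted-version comparison in `scripts/common.sh` (`cmp_versions 1.15 '<' 2`): both versions are split on dots and compared numerically field by field. An illustrative re-implementation of that check — the name `version_lt` is a stand-in, not the actual helper:

```shell
# Illustrative sketch of the dotted-version "less than" check traced
# above: split both versions on '.', compare numerically field by
# field, and treat a missing field as 0.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i
    for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # strictly less
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # strictly greater
    done
    return 1   # all fields equal
}
```

So `1.15 < 2` succeeds on the first field (1 < 2), which is why the lcov version check in the trace takes the `ver1[v] < ver2[v]` branch and returns 0.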
02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:45.040 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:45.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:45.041 02:49:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:45.041 02:49:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:50.317 02:49:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:50.317 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:50.317 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:50.317 Found net devices under 0000:af:00.0: cvl_0_0 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:50.317 Found net devices under 0000:af:00.1: cvl_0_1 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:50.317 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:50.318 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:50.318 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:50.318 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:27:50.318 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:50.318 02:49:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:51.254 02:49:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:53.795 02:49:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:59.070 02:49:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:59.070 02:49:29 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:59.070 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.1 (0x8086 - 0x159b)' 00:27:59.070 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:59.070 Found net devices under 0000:af:00.0: cvl_0_0 
00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.070 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:59.071 Found net devices under 0000:af:00.1: cvl_0_1 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 
-- # ip link set cvl_0_1 up 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:59.071 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:59.071 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.613 ms 00:27:59.071 00:27:59.071 --- 10.0.0.2 ping statistics --- 00:27:59.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.071 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:59.071 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:59.071 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:27:59.071 00:27:59.071 --- 10.0.0.1 ping statistics --- 00:27:59.071 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.071 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1083599 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1083599 00:27:59.071 
02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1083599 ']' 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:59.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:59.071 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.071 [2024-12-16 02:49:29.588922] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:27:59.071 [2024-12-16 02:49:29.588967] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:59.071 [2024-12-16 02:49:29.665692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:59.071 [2024-12-16 02:49:29.688158] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:59.071 [2024-12-16 02:49:29.688196] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:59.071 [2024-12-16 02:49:29.688203] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:59.071 [2024-12-16 02:49:29.688208] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:59.071 [2024-12-16 02:49:29.688213] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:59.071 [2024-12-16 02:49:29.689472] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.071 [2024-12-16 02:49:29.689580] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:27:59.071 [2024-12-16 02:49:29.689664] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:59.071 [2024-12-16 02:49:29.689665] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.330 [2024-12-16 02:49:29.921632] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.330 
02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.330 Malloc1 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.330 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.330 [2024-12-16 02:49:29.985165] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:27:59.588 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.588 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1083781 00:27:59.588 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:27:59.588 02:49:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:01.491 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:28:01.491 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:01.491 02:49:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:01.491 02:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.491 02:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:28:01.491 "tick_rate": 2100000000, 00:28:01.491 "poll_groups": [ 00:28:01.491 { 00:28:01.491 "name": "nvmf_tgt_poll_group_000", 00:28:01.491 "admin_qpairs": 1, 00:28:01.491 "io_qpairs": 1, 00:28:01.491 "current_admin_qpairs": 1, 00:28:01.491 "current_io_qpairs": 1, 00:28:01.491 "pending_bdev_io": 0, 00:28:01.491 "completed_nvme_io": 19884, 00:28:01.491 "transports": [ 00:28:01.491 { 00:28:01.491 "trtype": "TCP" 00:28:01.491 } 00:28:01.491 ] 00:28:01.491 }, 00:28:01.491 { 00:28:01.491 "name": "nvmf_tgt_poll_group_001", 00:28:01.491 "admin_qpairs": 0, 00:28:01.491 "io_qpairs": 1, 00:28:01.492 "current_admin_qpairs": 0, 00:28:01.492 "current_io_qpairs": 1, 00:28:01.492 "pending_bdev_io": 0, 00:28:01.492 "completed_nvme_io": 19984, 00:28:01.492 "transports": [ 
00:28:01.492 { 00:28:01.492 "trtype": "TCP" 00:28:01.492 } 00:28:01.492 ] 00:28:01.492 }, 00:28:01.492 { 00:28:01.492 "name": "nvmf_tgt_poll_group_002", 00:28:01.492 "admin_qpairs": 0, 00:28:01.492 "io_qpairs": 1, 00:28:01.492 "current_admin_qpairs": 0, 00:28:01.492 "current_io_qpairs": 1, 00:28:01.492 "pending_bdev_io": 0, 00:28:01.492 "completed_nvme_io": 20169, 00:28:01.492 "transports": [ 00:28:01.492 { 00:28:01.492 "trtype": "TCP" 00:28:01.492 } 00:28:01.492 ] 00:28:01.492 }, 00:28:01.492 { 00:28:01.492 "name": "nvmf_tgt_poll_group_003", 00:28:01.492 "admin_qpairs": 0, 00:28:01.492 "io_qpairs": 1, 00:28:01.492 "current_admin_qpairs": 0, 00:28:01.492 "current_io_qpairs": 1, 00:28:01.492 "pending_bdev_io": 0, 00:28:01.492 "completed_nvme_io": 20006, 00:28:01.492 "transports": [ 00:28:01.492 { 00:28:01.492 "trtype": "TCP" 00:28:01.492 } 00:28:01.492 ] 00:28:01.492 } 00:28:01.492 ] 00:28:01.492 }' 00:28:01.492 02:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:01.492 02:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:28:01.492 02:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:28:01.492 02:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:28:01.492 02:49:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1083781 00:28:09.610 Initializing NVMe Controllers 00:28:09.610 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:09.610 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:09.610 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:09.610 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:09.610 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:09.610 Initialization complete. Launching workers. 00:28:09.610 ======================================================== 00:28:09.610 Latency(us) 00:28:09.610 Device Information : IOPS MiB/s Average min max 00:28:09.610 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10636.50 41.55 6017.19 2004.25 10312.65 00:28:09.610 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10703.50 41.81 5980.26 1963.31 10646.76 00:28:09.610 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10784.30 42.13 5934.40 1496.47 13555.41 00:28:09.610 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10650.30 41.60 6008.82 2339.05 10472.68 00:28:09.610 ======================================================== 00:28:09.610 Total : 42774.59 167.09 5984.99 1496.47 13555.41 00:28:09.610 00:28:09.610 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:09.610 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:09.610 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:09.610 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:09.610 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:09.610 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:09.610 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:09.610 rmmod nvme_tcp 00:28:09.610 rmmod nvme_fabrics 00:28:09.610 rmmod nvme_keyring 00:28:09.610 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:09.610 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:09.610 02:49:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:09.610 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1083599 ']' 00:28:09.610 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1083599 00:28:09.610 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1083599 ']' 00:28:09.610 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1083599 00:28:09.610 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:09.610 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:09.610 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1083599 00:28:09.869 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:09.869 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:09.869 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1083599' 00:28:09.869 killing process with pid 1083599 00:28:09.869 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1083599 00:28:09.869 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1083599 00:28:09.869 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:09.869 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:09.869 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:09.869 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:09.869 
02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:09.869 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:09.869 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:09.869 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:09.869 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:09.869 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.869 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:09.869 02:49:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.404 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:12.404 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:12.404 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:12.404 02:49:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:13.342 02:49:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:15.877 02:49:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@476 -- # prepare_net_devs 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:21.150 02:49:51 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:21.150 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:21.151 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:21.151 02:49:51 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:21.151 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:af:00.0: cvl_0_0' 00:28:21.151 Found net devices under 0000:af:00.0: cvl_0_0 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:21.151 Found net devices under 0000:af:00.1: cvl_0_1 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:21.151 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:21.151 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.752 ms 00:28:21.151 00:28:21.151 --- 10.0.0.2 ping statistics --- 00:28:21.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.151 rtt min/avg/max/mdev = 0.752/0.752/0.752/0.000 ms 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:21.151 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:21.151 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:28:21.151 00:28:21.151 --- 10.0.0.1 ping statistics --- 00:28:21.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.151 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:21.151 net.core.busy_poll = 1 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:21.151 net.core.busy_read = 1 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:21.151 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:21.152 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:21.152 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:21.152 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:21.152 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:21.152 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:21.152 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:21.152 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:21.152 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1087637 00:28:21.152 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1087637 00:28:21.152 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:28:21.152 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1087637 ']' 00:28:21.152 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.152 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:21.152 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:21.152 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:21.152 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:21.152 [2024-12-16 02:49:51.759195] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:28:21.152 [2024-12-16 02:49:51.759240] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:21.411 [2024-12-16 02:49:51.837406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:21.411 [2024-12-16 02:49:51.860618] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:21.411 [2024-12-16 02:49:51.860653] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:21.411 [2024-12-16 02:49:51.860660] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:21.411 [2024-12-16 02:49:51.860667] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:21.411 [2024-12-16 02:49:51.860673] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:21.411 [2024-12-16 02:49:51.862104] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:21.411 [2024-12-16 02:49:51.862130] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:21.411 [2024-12-16 02:49:51.862218] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.411 [2024-12-16 02:49:51.862219] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:21.411 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:21.411 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:21.411 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:21.411 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:21.411 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:21.411 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:21.411 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:21.411 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:21.411 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:21.411 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.411 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:21.411 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:28:21.411 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:21.411 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:21.411 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.411 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:21.411 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.411 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:21.411 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.411 02:49:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:21.670 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.670 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:21.670 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.670 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:21.670 [2024-12-16 02:49:52.079329] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:21.670 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.670 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:21.670 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.670 02:49:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:21.670 Malloc1 00:28:21.670 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.670 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:21.670 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.670 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:21.670 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.670 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:21.670 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.670 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:21.670 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.670 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:21.670 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:21.670 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:21.670 [2024-12-16 02:49:52.139585] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:21.670 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:21.670 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1087666 
00:28:21.670 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:21.670 02:49:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:23.571 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:23.571 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.571 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:23.571 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.571 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:23.571 "tick_rate": 2100000000, 00:28:23.571 "poll_groups": [ 00:28:23.571 { 00:28:23.571 "name": "nvmf_tgt_poll_group_000", 00:28:23.571 "admin_qpairs": 1, 00:28:23.571 "io_qpairs": 1, 00:28:23.571 "current_admin_qpairs": 1, 00:28:23.571 "current_io_qpairs": 1, 00:28:23.571 "pending_bdev_io": 0, 00:28:23.571 "completed_nvme_io": 27117, 00:28:23.571 "transports": [ 00:28:23.571 { 00:28:23.571 "trtype": "TCP" 00:28:23.571 } 00:28:23.571 ] 00:28:23.571 }, 00:28:23.571 { 00:28:23.571 "name": "nvmf_tgt_poll_group_001", 00:28:23.571 "admin_qpairs": 0, 00:28:23.571 "io_qpairs": 3, 00:28:23.571 "current_admin_qpairs": 0, 00:28:23.571 "current_io_qpairs": 3, 00:28:23.571 "pending_bdev_io": 0, 00:28:23.571 "completed_nvme_io": 28838, 00:28:23.571 "transports": [ 00:28:23.571 { 00:28:23.571 "trtype": "TCP" 00:28:23.571 } 00:28:23.571 ] 00:28:23.571 }, 00:28:23.571 { 00:28:23.571 "name": "nvmf_tgt_poll_group_002", 00:28:23.571 "admin_qpairs": 0, 00:28:23.571 "io_qpairs": 0, 00:28:23.571 "current_admin_qpairs": 0, 
00:28:23.571 "current_io_qpairs": 0, 00:28:23.571 "pending_bdev_io": 0, 00:28:23.571 "completed_nvme_io": 0, 00:28:23.571 "transports": [ 00:28:23.571 { 00:28:23.571 "trtype": "TCP" 00:28:23.571 } 00:28:23.571 ] 00:28:23.571 }, 00:28:23.571 { 00:28:23.571 "name": "nvmf_tgt_poll_group_003", 00:28:23.571 "admin_qpairs": 0, 00:28:23.571 "io_qpairs": 0, 00:28:23.571 "current_admin_qpairs": 0, 00:28:23.571 "current_io_qpairs": 0, 00:28:23.571 "pending_bdev_io": 0, 00:28:23.571 "completed_nvme_io": 0, 00:28:23.571 "transports": [ 00:28:23.571 { 00:28:23.571 "trtype": "TCP" 00:28:23.571 } 00:28:23.571 ] 00:28:23.571 } 00:28:23.571 ] 00:28:23.571 }' 00:28:23.571 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:23.571 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:23.571 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:23.571 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:23.571 02:49:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1087666 00:28:31.686 Initializing NVMe Controllers 00:28:31.686 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:31.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:31.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:31.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:31.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:31.686 Initialization complete. Launching workers. 
00:28:31.686 ======================================================== 00:28:31.686 Latency(us) 00:28:31.686 Device Information : IOPS MiB/s Average min max 00:28:31.686 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5557.40 21.71 11516.33 1562.58 60154.20 00:28:31.686 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5122.80 20.01 12531.94 1724.97 59722.69 00:28:31.686 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14277.30 55.77 4482.04 1170.98 46641.86 00:28:31.686 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4752.80 18.57 13466.72 1487.22 58262.87 00:28:31.686 ======================================================== 00:28:31.686 Total : 29710.30 116.06 8623.12 1170.98 60154.20 00:28:31.686 00:28:31.686 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:31.686 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:31.945 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:31.945 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:31.945 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:31.945 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:31.945 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:31.945 rmmod nvme_tcp 00:28:31.945 rmmod nvme_fabrics 00:28:31.945 rmmod nvme_keyring 00:28:31.945 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:31.945 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:31.945 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:31.945 02:50:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1087637 ']' 00:28:31.945 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1087637 00:28:31.945 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1087637 ']' 00:28:31.945 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1087637 00:28:31.945 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:31.945 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:31.945 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1087637 00:28:31.945 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:31.945 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:31.945 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1087637' 00:28:31.945 killing process with pid 1087637 00:28:31.945 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1087637 00:28:31.945 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1087637 00:28:32.204 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:32.204 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:32.204 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:32.204 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:32.204 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:32.204 
02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:32.204 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:32.204 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:32.204 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:32.204 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:32.204 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:32.204 02:50:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:35.493 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:35.493 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:35.493 00:28:35.493 real 0m51.047s 00:28:35.493 user 2m44.281s 00:28:35.493 sys 0m10.134s 00:28:35.493 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:35.493 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:35.493 ************************************ 00:28:35.493 END TEST nvmf_perf_adq 00:28:35.493 ************************************ 00:28:35.493 02:50:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:35.493 02:50:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:35.493 02:50:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:35.493 02:50:05 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:28:35.493 ************************************ 00:28:35.493 START TEST nvmf_shutdown 00:28:35.493 ************************************ 00:28:35.493 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:35.493 * Looking for test storage... 00:28:35.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:35.493 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:35.493 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:28:35.493 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:35.493 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:35.493 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:35.493 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:35.494 02:50:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:35.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:35.494 --rc genhtml_branch_coverage=1 00:28:35.494 --rc genhtml_function_coverage=1 00:28:35.494 --rc genhtml_legend=1 00:28:35.494 --rc geninfo_all_blocks=1 00:28:35.494 --rc geninfo_unexecuted_blocks=1 00:28:35.494 00:28:35.494 ' 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:35.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:35.494 --rc genhtml_branch_coverage=1 00:28:35.494 --rc genhtml_function_coverage=1 00:28:35.494 --rc genhtml_legend=1 00:28:35.494 --rc geninfo_all_blocks=1 00:28:35.494 --rc geninfo_unexecuted_blocks=1 00:28:35.494 00:28:35.494 ' 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:35.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:35.494 --rc genhtml_branch_coverage=1 00:28:35.494 --rc genhtml_function_coverage=1 00:28:35.494 --rc genhtml_legend=1 00:28:35.494 --rc geninfo_all_blocks=1 00:28:35.494 --rc geninfo_unexecuted_blocks=1 00:28:35.494 00:28:35.494 ' 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:35.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:35.494 --rc genhtml_branch_coverage=1 00:28:35.494 --rc genhtml_function_coverage=1 00:28:35.494 --rc genhtml_legend=1 00:28:35.494 --rc geninfo_all_blocks=1 00:28:35.494 --rc geninfo_unexecuted_blocks=1 00:28:35.494 00:28:35.494 ' 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:35.494 02:50:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:35.494 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:35.494 ************************************ 00:28:35.494 START TEST nvmf_shutdown_tc1 00:28:35.494 ************************************ 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:35.494 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:35.495 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:35.495 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:35.495 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:35.495 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:35.495 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:35.495 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:28:35.495 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:35.495 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:35.495 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:35.495 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:35.495 02:50:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:42.276 02:50:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:42.276 02:50:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:42.276 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.276 02:50:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:42.276 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:42.276 Found net devices under 0000:af:00.0: cvl_0_0 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.276 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:42.276 Found net devices under 0000:af:00.1: cvl_0_1 00:28:42.277 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.277 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:42.277 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:42.277 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:42.277 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:42.277 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:42.277 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:42.277 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:42.277 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:42.277 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:42.277 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:42.277 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:42.277 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:42.277 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:42.277 02:50:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:42.277 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:42.277 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:42.277 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:42.277 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:42.277 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:42.277 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:42.277 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:42.277 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:42.277 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:42.277 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:42.277 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:42.277 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:42.277 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:42.277 02:50:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:42.277 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:42.277 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:28:42.277 00:28:42.277 --- 10.0.0.2 ping statistics --- 00:28:42.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.277 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:42.277 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:42.277 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:28:42.277 00:28:42.277 --- 10.0.0.1 ping statistics --- 00:28:42.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.277 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1093009 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1093009 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1093009 ']' 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:42.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:42.277 [2024-12-16 02:50:12.107078] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:28:42.277 [2024-12-16 02:50:12.107121] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:42.277 [2024-12-16 02:50:12.183935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:42.277 [2024-12-16 02:50:12.205593] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:42.277 [2024-12-16 02:50:12.205632] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:42.277 [2024-12-16 02:50:12.205639] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:42.277 [2024-12-16 02:50:12.205646] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:42.277 [2024-12-16 02:50:12.205652] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:42.277 [2024-12-16 02:50:12.207145] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:42.277 [2024-12-16 02:50:12.207252] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:42.277 [2024-12-16 02:50:12.207335] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.277 [2024-12-16 02:50:12.207335] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:42.277 [2024-12-16 02:50:12.343118] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.277 02:50:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:42.277 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:42.278 Malloc1 00:28:42.278 [2024-12-16 02:50:12.448599] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:42.278 Malloc2 00:28:42.278 Malloc3 00:28:42.278 Malloc4 00:28:42.278 Malloc5 00:28:42.278 Malloc6 00:28:42.278 Malloc7 00:28:42.278 Malloc8 00:28:42.278 Malloc9 
00:28:42.278 Malloc10 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1093273 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1093273 /var/tmp/bdevperf.sock 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1093273 ']' 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:42.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
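The `bdev_svc` invocation traced above reads its configuration from `--json /dev/fd/63`: in bash, a process substitution `<(...)` expands to a `/dev/fd/N` path, so the JSON produced by `gen_nvmf_target_json` is streamed to the app without a temporary file. A minimal sketch of that mechanism, assuming illustrative stand-in functions (`gen_cfg` and `consume_json` are not SPDK names):

```shell
#!/usr/bin/env bash
# gen_cfg plays the role of gen_nvmf_target_json; consume_json plays the
# role of the app that receives --json <path> and reads the file.
gen_cfg() { printf '{"subsystems": []}\n'; }
consume_json() { cat "$1"; }

# <(gen_cfg) expands to a /dev/fd/N path (often /dev/fd/63), which is why
# the trace shows the literal argument "--json /dev/fd/63".
consume_json <(gen_cfg)
```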
00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:42.278 { 00:28:42.278 "params": { 00:28:42.278 "name": "Nvme$subsystem", 00:28:42.278 "trtype": "$TEST_TRANSPORT", 00:28:42.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.278 "adrfam": "ipv4", 00:28:42.278 "trsvcid": "$NVMF_PORT", 00:28:42.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.278 "hdgst": ${hdgst:-false}, 00:28:42.278 "ddgst": ${ddgst:-false} 00:28:42.278 }, 00:28:42.278 "method": "bdev_nvme_attach_controller" 00:28:42.278 } 00:28:42.278 EOF 00:28:42.278 )") 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:42.278 { 00:28:42.278 "params": { 00:28:42.278 "name": "Nvme$subsystem", 00:28:42.278 "trtype": "$TEST_TRANSPORT", 00:28:42.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.278 "adrfam": "ipv4", 00:28:42.278 "trsvcid": "$NVMF_PORT", 00:28:42.278 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.278 "hdgst": ${hdgst:-false}, 00:28:42.278 "ddgst": ${ddgst:-false} 00:28:42.278 }, 00:28:42.278 "method": "bdev_nvme_attach_controller" 00:28:42.278 } 00:28:42.278 EOF 00:28:42.278 )") 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:42.278 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:42.278 { 00:28:42.278 "params": { 00:28:42.278 "name": "Nvme$subsystem", 00:28:42.278 "trtype": "$TEST_TRANSPORT", 00:28:42.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.278 "adrfam": "ipv4", 00:28:42.278 "trsvcid": "$NVMF_PORT", 00:28:42.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.278 "hdgst": ${hdgst:-false}, 00:28:42.278 "ddgst": ${ddgst:-false} 00:28:42.278 }, 00:28:42.278 "method": "bdev_nvme_attach_controller" 00:28:42.278 } 00:28:42.278 EOF 00:28:42.278 )") 00:28:42.538 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:42.538 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:42.538 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:42.538 { 00:28:42.538 "params": { 00:28:42.538 "name": "Nvme$subsystem", 00:28:42.538 "trtype": "$TEST_TRANSPORT", 00:28:42.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.538 "adrfam": "ipv4", 00:28:42.538 "trsvcid": "$NVMF_PORT", 00:28:42.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.538 "hdgst": 
${hdgst:-false}, 00:28:42.538 "ddgst": ${ddgst:-false} 00:28:42.538 }, 00:28:42.538 "method": "bdev_nvme_attach_controller" 00:28:42.538 } 00:28:42.538 EOF 00:28:42.538 )") 00:28:42.538 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:42.538 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:42.538 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:42.538 { 00:28:42.538 "params": { 00:28:42.538 "name": "Nvme$subsystem", 00:28:42.538 "trtype": "$TEST_TRANSPORT", 00:28:42.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.538 "adrfam": "ipv4", 00:28:42.538 "trsvcid": "$NVMF_PORT", 00:28:42.538 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.538 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.538 "hdgst": ${hdgst:-false}, 00:28:42.538 "ddgst": ${ddgst:-false} 00:28:42.538 }, 00:28:42.538 "method": "bdev_nvme_attach_controller" 00:28:42.538 } 00:28:42.538 EOF 00:28:42.538 )") 00:28:42.538 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:42.538 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:42.538 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:42.538 { 00:28:42.538 "params": { 00:28:42.538 "name": "Nvme$subsystem", 00:28:42.538 "trtype": "$TEST_TRANSPORT", 00:28:42.538 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.539 "adrfam": "ipv4", 00:28:42.539 "trsvcid": "$NVMF_PORT", 00:28:42.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.539 "hdgst": ${hdgst:-false}, 00:28:42.539 "ddgst": ${ddgst:-false} 00:28:42.539 }, 00:28:42.539 "method": "bdev_nvme_attach_controller" 
00:28:42.539 } 00:28:42.539 EOF 00:28:42.539 )") 00:28:42.539 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:42.539 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:42.539 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:42.539 { 00:28:42.539 "params": { 00:28:42.539 "name": "Nvme$subsystem", 00:28:42.539 "trtype": "$TEST_TRANSPORT", 00:28:42.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.539 "adrfam": "ipv4", 00:28:42.539 "trsvcid": "$NVMF_PORT", 00:28:42.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.539 "hdgst": ${hdgst:-false}, 00:28:42.539 "ddgst": ${ddgst:-false} 00:28:42.539 }, 00:28:42.539 "method": "bdev_nvme_attach_controller" 00:28:42.539 } 00:28:42.539 EOF 00:28:42.539 )") 00:28:42.539 [2024-12-16 02:50:12.922741] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:28:42.539 [2024-12-16 02:50:12.922790] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:42.539 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:42.539 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:42.539 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:42.539 { 00:28:42.539 "params": { 00:28:42.539 "name": "Nvme$subsystem", 00:28:42.539 "trtype": "$TEST_TRANSPORT", 00:28:42.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.539 "adrfam": "ipv4", 00:28:42.539 "trsvcid": "$NVMF_PORT", 00:28:42.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.539 "hdgst": ${hdgst:-false}, 00:28:42.539 "ddgst": ${ddgst:-false} 00:28:42.539 }, 00:28:42.539 "method": "bdev_nvme_attach_controller" 00:28:42.539 } 00:28:42.539 EOF 00:28:42.539 )") 00:28:42.539 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:42.539 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:42.539 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:42.539 { 00:28:42.539 "params": { 00:28:42.539 "name": "Nvme$subsystem", 00:28:42.539 "trtype": "$TEST_TRANSPORT", 00:28:42.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.539 "adrfam": "ipv4", 00:28:42.539 "trsvcid": "$NVMF_PORT", 00:28:42.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.539 "hdgst": ${hdgst:-false}, 
00:28:42.539 "ddgst": ${ddgst:-false} 00:28:42.539 }, 00:28:42.539 "method": "bdev_nvme_attach_controller" 00:28:42.539 } 00:28:42.539 EOF 00:28:42.539 )") 00:28:42.539 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:42.539 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:42.539 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:42.539 { 00:28:42.539 "params": { 00:28:42.539 "name": "Nvme$subsystem", 00:28:42.539 "trtype": "$TEST_TRANSPORT", 00:28:42.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:42.539 "adrfam": "ipv4", 00:28:42.539 "trsvcid": "$NVMF_PORT", 00:28:42.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:42.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:42.539 "hdgst": ${hdgst:-false}, 00:28:42.539 "ddgst": ${ddgst:-false} 00:28:42.539 }, 00:28:42.539 "method": "bdev_nvme_attach_controller" 00:28:42.539 } 00:28:42.539 EOF 00:28:42.539 )") 00:28:42.539 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:42.539 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
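The xtrace above repeats one pattern per subsystem: `nvmf/common.sh` appends a heredoc JSON fragment to a `config` array (`config+=("$(cat <<-EOF ... EOF)")`), then joins the fragments with `IFS=,` and validates the result with `jq .`. The sketch below reproduces that idiom standalone; it is a simplified model of `gen_nvmf_target_json`, not a copy of it (the real helper splices the fragments into a larger bdev-config document, and the environment defaults used here are placeholders):

```shell
#!/usr/bin/env bash
# Sketch of the config-assembly pattern from the trace: one heredoc JSON
# fragment per subsystem, comma-joined, then validated/pretty-printed by jq.
gen_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # Each fragment mirrors the fields seen in the log; hdgst/ddgst
        # default to false when the variables are unset, as in the trace.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the fragments with commas inside an array so the whole thing is
    # valid JSON, then let jq validate it (the `jq .` step from the log).
    jq . <<JSON
[ $(IFS=,; printf '%s\n' "${config[*]}") ]
JSON
}

gen_target_json 1 2
```

Setting `IFS=,` inside the command substitution keeps the comma-join local to that subshell, which is why the surrounding script's word splitting is unaffected.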
00:28:42.539 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:42.539 02:50:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:42.539 "params": { 00:28:42.539 "name": "Nvme1", 00:28:42.539 "trtype": "tcp", 00:28:42.539 "traddr": "10.0.0.2", 00:28:42.539 "adrfam": "ipv4", 00:28:42.539 "trsvcid": "4420", 00:28:42.539 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:42.539 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:42.539 "hdgst": false, 00:28:42.539 "ddgst": false 00:28:42.539 }, 00:28:42.539 "method": "bdev_nvme_attach_controller" 00:28:42.539 },{ 00:28:42.539 "params": { 00:28:42.539 "name": "Nvme2", 00:28:42.539 "trtype": "tcp", 00:28:42.539 "traddr": "10.0.0.2", 00:28:42.539 "adrfam": "ipv4", 00:28:42.539 "trsvcid": "4420", 00:28:42.539 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:42.539 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:42.539 "hdgst": false, 00:28:42.539 "ddgst": false 00:28:42.539 }, 00:28:42.539 "method": "bdev_nvme_attach_controller" 00:28:42.539 },{ 00:28:42.539 "params": { 00:28:42.539 "name": "Nvme3", 00:28:42.539 "trtype": "tcp", 00:28:42.539 "traddr": "10.0.0.2", 00:28:42.539 "adrfam": "ipv4", 00:28:42.539 "trsvcid": "4420", 00:28:42.539 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:42.539 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:42.539 "hdgst": false, 00:28:42.539 "ddgst": false 00:28:42.539 }, 00:28:42.539 "method": "bdev_nvme_attach_controller" 00:28:42.539 },{ 00:28:42.539 "params": { 00:28:42.539 "name": "Nvme4", 00:28:42.539 "trtype": "tcp", 00:28:42.539 "traddr": "10.0.0.2", 00:28:42.539 "adrfam": "ipv4", 00:28:42.539 "trsvcid": "4420", 00:28:42.539 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:42.539 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:42.539 "hdgst": false, 00:28:42.539 "ddgst": false 00:28:42.539 }, 00:28:42.539 "method": "bdev_nvme_attach_controller" 00:28:42.539 },{ 00:28:42.539 "params": { 
00:28:42.539 "name": "Nvme5", 00:28:42.539 "trtype": "tcp", 00:28:42.539 "traddr": "10.0.0.2", 00:28:42.539 "adrfam": "ipv4", 00:28:42.539 "trsvcid": "4420", 00:28:42.539 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:42.539 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:42.539 "hdgst": false, 00:28:42.539 "ddgst": false 00:28:42.539 }, 00:28:42.539 "method": "bdev_nvme_attach_controller" 00:28:42.539 },{ 00:28:42.539 "params": { 00:28:42.539 "name": "Nvme6", 00:28:42.539 "trtype": "tcp", 00:28:42.539 "traddr": "10.0.0.2", 00:28:42.539 "adrfam": "ipv4", 00:28:42.539 "trsvcid": "4420", 00:28:42.539 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:42.539 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:42.539 "hdgst": false, 00:28:42.539 "ddgst": false 00:28:42.539 }, 00:28:42.539 "method": "bdev_nvme_attach_controller" 00:28:42.539 },{ 00:28:42.539 "params": { 00:28:42.539 "name": "Nvme7", 00:28:42.539 "trtype": "tcp", 00:28:42.539 "traddr": "10.0.0.2", 00:28:42.539 "adrfam": "ipv4", 00:28:42.539 "trsvcid": "4420", 00:28:42.539 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:42.539 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:42.539 "hdgst": false, 00:28:42.539 "ddgst": false 00:28:42.539 }, 00:28:42.539 "method": "bdev_nvme_attach_controller" 00:28:42.539 },{ 00:28:42.539 "params": { 00:28:42.539 "name": "Nvme8", 00:28:42.539 "trtype": "tcp", 00:28:42.539 "traddr": "10.0.0.2", 00:28:42.539 "adrfam": "ipv4", 00:28:42.539 "trsvcid": "4420", 00:28:42.539 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:42.539 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:42.539 "hdgst": false, 00:28:42.539 "ddgst": false 00:28:42.539 }, 00:28:42.539 "method": "bdev_nvme_attach_controller" 00:28:42.539 },{ 00:28:42.539 "params": { 00:28:42.539 "name": "Nvme9", 00:28:42.539 "trtype": "tcp", 00:28:42.539 "traddr": "10.0.0.2", 00:28:42.539 "adrfam": "ipv4", 00:28:42.539 "trsvcid": "4420", 00:28:42.539 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:42.539 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:28:42.539 "hdgst": false, 00:28:42.539 "ddgst": false 00:28:42.539 }, 00:28:42.539 "method": "bdev_nvme_attach_controller" 00:28:42.539 },{ 00:28:42.539 "params": { 00:28:42.539 "name": "Nvme10", 00:28:42.539 "trtype": "tcp", 00:28:42.539 "traddr": "10.0.0.2", 00:28:42.539 "adrfam": "ipv4", 00:28:42.539 "trsvcid": "4420", 00:28:42.539 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:42.539 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:42.539 "hdgst": false, 00:28:42.539 "ddgst": false 00:28:42.539 }, 00:28:42.539 "method": "bdev_nvme_attach_controller" 00:28:42.539 }' 00:28:42.539 [2024-12-16 02:50:13.001658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:42.539 [2024-12-16 02:50:13.024053] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:44.442 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:44.442 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:44.442 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:44.442 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.443 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:44.443 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.443 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1093273 00:28:44.443 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:44.443 02:50:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:45.377 
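At this point the tc1 flow hard-kills the `bdev_svc` helper (`kill -9` at shutdown.sh@84) and then uses `kill -0` (shutdown.sh@89) to assert that the long-lived target process survived. `kill -0` delivers no signal; it only reports whether the PID is still signalable. A minimal standalone model of that liveness check, using `sleep` jobs as stand-ins for the helper and the target:

```shell
# Stand-ins for the two processes in the trace: a helper that gets
# SIGKILLed and a target that must survive it.
sleep 30 &
target_pid=$!
sleep 30 &
helper_pid=$!

# Hard-kill the helper and reap it (the `Killed ... bdev_svc` line).
kill -9 "$helper_pid"
wait "$helper_pid" 2>/dev/null

# kill -0 sends no signal; success means the PID still exists.
survived=no
if kill -0 "$target_pid" 2>/dev/null; then
    survived=yes
fi

# Clean up the surviving stand-in.
kill "$target_pid" 2>/dev/null
wait 2>/dev/null || true
echo "$survived"
```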
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1093273 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:45.377 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1093009 00:28:45.377 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:45.377 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:45.377 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:45.377 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:45.377 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:45.377 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:45.377 { 00:28:45.377 "params": { 00:28:45.377 "name": "Nvme$subsystem", 00:28:45.377 "trtype": "$TEST_TRANSPORT", 00:28:45.377 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.377 "adrfam": "ipv4", 00:28:45.377 "trsvcid": "$NVMF_PORT", 00:28:45.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.377 "hdgst": ${hdgst:-false}, 00:28:45.377 "ddgst": ${ddgst:-false} 00:28:45.377 }, 00:28:45.377 "method": "bdev_nvme_attach_controller" 00:28:45.377 } 00:28:45.377 EOF 00:28:45.377 )") 00:28:45.378 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:45.378 02:50:15 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:45.378 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:45.378 { 00:28:45.378 "params": { 00:28:45.378 "name": "Nvme$subsystem", 00:28:45.378 "trtype": "$TEST_TRANSPORT", 00:28:45.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.378 "adrfam": "ipv4", 00:28:45.378 "trsvcid": "$NVMF_PORT", 00:28:45.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.378 "hdgst": ${hdgst:-false}, 00:28:45.378 "ddgst": ${ddgst:-false} 00:28:45.378 }, 00:28:45.378 "method": "bdev_nvme_attach_controller" 00:28:45.378 } 00:28:45.378 EOF 00:28:45.378 )") 00:28:45.378 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:45.378 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:45.378 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:45.378 { 00:28:45.378 "params": { 00:28:45.378 "name": "Nvme$subsystem", 00:28:45.378 "trtype": "$TEST_TRANSPORT", 00:28:45.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.378 "adrfam": "ipv4", 00:28:45.378 "trsvcid": "$NVMF_PORT", 00:28:45.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.378 "hdgst": ${hdgst:-false}, 00:28:45.378 "ddgst": ${ddgst:-false} 00:28:45.378 }, 00:28:45.378 "method": "bdev_nvme_attach_controller" 00:28:45.378 } 00:28:45.378 EOF 00:28:45.378 )") 00:28:45.378 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:45.378 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:45.378 
02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:45.378 { 00:28:45.378 "params": { 00:28:45.378 "name": "Nvme$subsystem", 00:28:45.378 "trtype": "$TEST_TRANSPORT", 00:28:45.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.378 "adrfam": "ipv4", 00:28:45.378 "trsvcid": "$NVMF_PORT", 00:28:45.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.378 "hdgst": ${hdgst:-false}, 00:28:45.378 "ddgst": ${ddgst:-false} 00:28:45.378 }, 00:28:45.378 "method": "bdev_nvme_attach_controller" 00:28:45.378 } 00:28:45.378 EOF 00:28:45.378 )") 00:28:45.378 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:45.378 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:45.378 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:45.378 { 00:28:45.378 "params": { 00:28:45.378 "name": "Nvme$subsystem", 00:28:45.378 "trtype": "$TEST_TRANSPORT", 00:28:45.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.378 "adrfam": "ipv4", 00:28:45.378 "trsvcid": "$NVMF_PORT", 00:28:45.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.378 "hdgst": ${hdgst:-false}, 00:28:45.378 "ddgst": ${ddgst:-false} 00:28:45.378 }, 00:28:45.378 "method": "bdev_nvme_attach_controller" 00:28:45.378 } 00:28:45.378 EOF 00:28:45.378 )") 00:28:45.378 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:45.378 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:45.378 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:28:45.378 { 00:28:45.378 "params": { 00:28:45.378 "name": "Nvme$subsystem", 00:28:45.378 "trtype": "$TEST_TRANSPORT", 00:28:45.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.378 "adrfam": "ipv4", 00:28:45.378 "trsvcid": "$NVMF_PORT", 00:28:45.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.378 "hdgst": ${hdgst:-false}, 00:28:45.378 "ddgst": ${ddgst:-false} 00:28:45.378 }, 00:28:45.378 "method": "bdev_nvme_attach_controller" 00:28:45.378 } 00:28:45.378 EOF 00:28:45.378 )") 00:28:45.378 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:45.378 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:45.378 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:45.378 { 00:28:45.378 "params": { 00:28:45.378 "name": "Nvme$subsystem", 00:28:45.378 "trtype": "$TEST_TRANSPORT", 00:28:45.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.378 "adrfam": "ipv4", 00:28:45.378 "trsvcid": "$NVMF_PORT", 00:28:45.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.378 "hdgst": ${hdgst:-false}, 00:28:45.378 "ddgst": ${ddgst:-false} 00:28:45.378 }, 00:28:45.378 "method": "bdev_nvme_attach_controller" 00:28:45.378 } 00:28:45.378 EOF 00:28:45.378 )") 00:28:45.378 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:45.378 [2024-12-16 02:50:15.847835] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:28:45.378 [2024-12-16 02:50:15.847895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1093751 ] 00:28:45.378 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:45.378 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:45.378 { 00:28:45.378 "params": { 00:28:45.378 "name": "Nvme$subsystem", 00:28:45.378 "trtype": "$TEST_TRANSPORT", 00:28:45.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.378 "adrfam": "ipv4", 00:28:45.378 "trsvcid": "$NVMF_PORT", 00:28:45.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.378 "hdgst": ${hdgst:-false}, 00:28:45.378 "ddgst": ${ddgst:-false} 00:28:45.378 }, 00:28:45.378 "method": "bdev_nvme_attach_controller" 00:28:45.378 } 00:28:45.378 EOF 00:28:45.378 )") 00:28:45.378 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:45.378 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:45.378 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:45.378 { 00:28:45.378 "params": { 00:28:45.378 "name": "Nvme$subsystem", 00:28:45.378 "trtype": "$TEST_TRANSPORT", 00:28:45.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.378 "adrfam": "ipv4", 00:28:45.378 "trsvcid": "$NVMF_PORT", 00:28:45.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.378 "hdgst": ${hdgst:-false}, 00:28:45.378 "ddgst": ${ddgst:-false} 00:28:45.378 }, 00:28:45.378 "method": 
"bdev_nvme_attach_controller" 00:28:45.378 } 00:28:45.378 EOF 00:28:45.378 )") 00:28:45.378 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:45.378 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:45.378 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:45.378 { 00:28:45.378 "params": { 00:28:45.378 "name": "Nvme$subsystem", 00:28:45.378 "trtype": "$TEST_TRANSPORT", 00:28:45.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:45.378 "adrfam": "ipv4", 00:28:45.378 "trsvcid": "$NVMF_PORT", 00:28:45.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:45.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:45.378 "hdgst": ${hdgst:-false}, 00:28:45.378 "ddgst": ${ddgst:-false} 00:28:45.378 }, 00:28:45.378 "method": "bdev_nvme_attach_controller" 00:28:45.378 } 00:28:45.378 EOF 00:28:45.378 )") 00:28:45.378 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:45.378 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:28:45.378 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:45.378 02:50:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:45.378 "params": { 00:28:45.378 "name": "Nvme1", 00:28:45.378 "trtype": "tcp", 00:28:45.378 "traddr": "10.0.0.2", 00:28:45.378 "adrfam": "ipv4", 00:28:45.378 "trsvcid": "4420", 00:28:45.378 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:45.378 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:45.378 "hdgst": false, 00:28:45.378 "ddgst": false 00:28:45.378 }, 00:28:45.378 "method": "bdev_nvme_attach_controller" 00:28:45.378 },{ 00:28:45.378 "params": { 00:28:45.378 "name": "Nvme2", 00:28:45.378 "trtype": "tcp", 00:28:45.378 "traddr": "10.0.0.2", 00:28:45.378 "adrfam": "ipv4", 00:28:45.378 "trsvcid": "4420", 00:28:45.378 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:45.378 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:45.378 "hdgst": false, 00:28:45.378 "ddgst": false 00:28:45.378 }, 00:28:45.379 "method": "bdev_nvme_attach_controller" 00:28:45.379 },{ 00:28:45.379 "params": { 00:28:45.379 "name": "Nvme3", 00:28:45.379 "trtype": "tcp", 00:28:45.379 "traddr": "10.0.0.2", 00:28:45.379 "adrfam": "ipv4", 00:28:45.379 "trsvcid": "4420", 00:28:45.379 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:45.379 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:45.379 "hdgst": false, 00:28:45.379 "ddgst": false 00:28:45.379 }, 00:28:45.379 "method": "bdev_nvme_attach_controller" 00:28:45.379 },{ 00:28:45.379 "params": { 00:28:45.379 "name": "Nvme4", 00:28:45.379 "trtype": "tcp", 00:28:45.379 "traddr": "10.0.0.2", 00:28:45.379 "adrfam": "ipv4", 00:28:45.379 "trsvcid": "4420", 00:28:45.379 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:45.379 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:45.379 "hdgst": false, 00:28:45.379 "ddgst": false 00:28:45.379 }, 00:28:45.379 "method": "bdev_nvme_attach_controller" 00:28:45.379 },{ 00:28:45.379 "params": { 
00:28:45.379 "name": "Nvme5", 00:28:45.379 "trtype": "tcp", 00:28:45.379 "traddr": "10.0.0.2", 00:28:45.379 "adrfam": "ipv4", 00:28:45.379 "trsvcid": "4420", 00:28:45.379 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:45.379 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:45.379 "hdgst": false, 00:28:45.379 "ddgst": false 00:28:45.379 }, 00:28:45.379 "method": "bdev_nvme_attach_controller" 00:28:45.379 },{ 00:28:45.379 "params": { 00:28:45.379 "name": "Nvme6", 00:28:45.379 "trtype": "tcp", 00:28:45.379 "traddr": "10.0.0.2", 00:28:45.379 "adrfam": "ipv4", 00:28:45.379 "trsvcid": "4420", 00:28:45.379 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:45.379 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:45.379 "hdgst": false, 00:28:45.379 "ddgst": false 00:28:45.379 }, 00:28:45.379 "method": "bdev_nvme_attach_controller" 00:28:45.379 },{ 00:28:45.379 "params": { 00:28:45.379 "name": "Nvme7", 00:28:45.379 "trtype": "tcp", 00:28:45.379 "traddr": "10.0.0.2", 00:28:45.379 "adrfam": "ipv4", 00:28:45.379 "trsvcid": "4420", 00:28:45.379 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:45.379 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:45.379 "hdgst": false, 00:28:45.379 "ddgst": false 00:28:45.379 }, 00:28:45.379 "method": "bdev_nvme_attach_controller" 00:28:45.379 },{ 00:28:45.379 "params": { 00:28:45.379 "name": "Nvme8", 00:28:45.379 "trtype": "tcp", 00:28:45.379 "traddr": "10.0.0.2", 00:28:45.379 "adrfam": "ipv4", 00:28:45.379 "trsvcid": "4420", 00:28:45.379 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:45.379 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:45.379 "hdgst": false, 00:28:45.379 "ddgst": false 00:28:45.379 }, 00:28:45.379 "method": "bdev_nvme_attach_controller" 00:28:45.379 },{ 00:28:45.379 "params": { 00:28:45.379 "name": "Nvme9", 00:28:45.379 "trtype": "tcp", 00:28:45.379 "traddr": "10.0.0.2", 00:28:45.379 "adrfam": "ipv4", 00:28:45.379 "trsvcid": "4420", 00:28:45.379 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:45.379 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:28:45.379 "hdgst": false, 00:28:45.379 "ddgst": false 00:28:45.379 }, 00:28:45.379 "method": "bdev_nvme_attach_controller" 00:28:45.379 },{ 00:28:45.379 "params": { 00:28:45.379 "name": "Nvme10", 00:28:45.379 "trtype": "tcp", 00:28:45.379 "traddr": "10.0.0.2", 00:28:45.379 "adrfam": "ipv4", 00:28:45.379 "trsvcid": "4420", 00:28:45.379 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:45.379 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:45.379 "hdgst": false, 00:28:45.379 "ddgst": false 00:28:45.379 }, 00:28:45.379 "method": "bdev_nvme_attach_controller" 00:28:45.379 }' 00:28:45.379 [2024-12-16 02:50:15.922758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:45.379 [2024-12-16 02:50:15.945257] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:46.755 Running I/O for 1 seconds... 00:28:48.132 2261.00 IOPS, 141.31 MiB/s 00:28:48.132 Latency(us) 00:28:48.132 [2024-12-16T01:50:18.791Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:48.132 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:48.132 Verification LBA range: start 0x0 length 0x400 00:28:48.132 Nvme1n1 : 1.06 242.33 15.15 0.00 0.00 261529.11 17850.76 219701.64 00:28:48.132 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:48.132 Verification LBA range: start 0x0 length 0x400 00:28:48.132 Nvme2n1 : 1.10 300.53 18.78 0.00 0.00 203284.35 12046.14 205720.62 00:28:48.132 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:48.132 Verification LBA range: start 0x0 length 0x400 00:28:48.133 Nvme3n1 : 1.13 283.60 17.72 0.00 0.00 217326.45 15042.07 243669.09 00:28:48.133 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:48.133 Verification LBA range: start 0x0 length 0x400 00:28:48.133 Nvme4n1 : 1.13 284.32 17.77 0.00 0.00 213374.29 19099.06 204721.98 00:28:48.133 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:28:48.133 Verification LBA range: start 0x0 length 0x400 00:28:48.133 Nvme5n1 : 1.13 285.67 17.85 0.00 0.00 209041.97 2434.19 203723.34 00:28:48.133 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:48.133 Verification LBA range: start 0x0 length 0x400 00:28:48.133 Nvme6n1 : 1.15 278.59 17.41 0.00 0.00 212043.09 15853.47 220700.28 00:28:48.133 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:48.133 Verification LBA range: start 0x0 length 0x400 00:28:48.133 Nvme7n1 : 1.14 285.12 17.82 0.00 0.00 203732.83 1950.48 234681.30 00:28:48.133 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:48.133 Verification LBA range: start 0x0 length 0x400 00:28:48.133 Nvme8n1 : 1.14 281.15 17.57 0.00 0.00 203767.81 13356.86 210713.84 00:28:48.133 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:48.133 Verification LBA range: start 0x0 length 0x400 00:28:48.133 Nvme9n1 : 1.15 277.51 17.34 0.00 0.00 203698.86 15728.64 237677.23 00:28:48.133 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:48.133 Verification LBA range: start 0x0 length 0x400 00:28:48.133 Nvme10n1 : 1.15 277.22 17.33 0.00 0.00 200345.31 18225.25 217704.35 00:28:48.133 [2024-12-16T01:50:18.792Z] =================================================================================================================== 00:28:48.133 [2024-12-16T01:50:18.792Z] Total : 2796.04 174.75 0.00 0.00 211774.29 1950.48 243669.09 00:28:48.133 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:28:48.133 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:48.133 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:28:48.133 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:48.133 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:48.133 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:48.133 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:28:48.133 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:48.133 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:28:48.133 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:48.133 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:48.133 rmmod nvme_tcp 00:28:48.133 rmmod nvme_fabrics 00:28:48.133 rmmod nvme_keyring 00:28:48.133 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:48.133 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:28:48.133 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:28:48.392 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1093009 ']' 00:28:48.392 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1093009 00:28:48.392 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1093009 ']' 00:28:48.392 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 1093009 00:28:48.392 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:28:48.392 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:48.392 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1093009 00:28:48.392 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:48.392 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:48.392 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1093009' 00:28:48.392 killing process with pid 1093009 00:28:48.392 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1093009 00:28:48.392 02:50:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1093009 00:28:48.651 02:50:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:48.651 02:50:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:48.651 02:50:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:48.651 02:50:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:28:48.651 02:50:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:28:48.651 02:50:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:48.651 02:50:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:28:48.651 02:50:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:48.651 02:50:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:48.651 02:50:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.651 02:50:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:48.651 02:50:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:51.188 00:28:51.188 real 0m15.216s 00:28:51.188 user 0m33.884s 00:28:51.188 sys 0m5.746s 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:51.188 ************************************ 00:28:51.188 END TEST nvmf_shutdown_tc1 00:28:51.188 ************************************ 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:51.188 ************************************ 00:28:51.188 
START TEST nvmf_shutdown_tc2 00:28:51.188 ************************************ 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:51.188 02:50:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:51.188 02:50:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:51.188 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:51.189 02:50:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:51.189 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:51.189 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:51.189 02:50:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.189 02:50:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:51.189 Found net devices under 0000:af:00.0: cvl_0_0 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:51.189 Found net devices under 0000:af:00.1: cvl_0_1 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
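The "Found net devices under 0000:af:00.0: cvl_0_0" lines come from `nvmf/common.sh` resolving each NIC's PCI address to its kernel net device by globbing `/sys/bus/pci/devices/$pci/net/*` and stripping the path prefix. The same logic can be exercised against a throwaway fake sysfs tree, so it runs without real hardware:

```shell
# How the harness maps a PCI address to its netdev: glob the device's
# net/ directory in sysfs, then keep only the basename. Demonstrated on a
# temporary fake tree so no real NICs are required.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:af:00.0/net/cvl_0_0" "$sysfs/0000:af:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:af:00.0 0000:af:00.1; do
    pci_net_devs=("$sysfs/$pci/net/"*)
    # strip everything up to the last '/', as "${pci_net_devs[@]##*/}" does in common.sh
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
rm -rf "$sysfs"
```

On the real system the glob hits the kernel-populated `net/` directory of the bound `ice` driver, which is why the two E810 ports resolve to `cvl_0_0` and `cvl_0_1`.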
# [[ yes == yes ]] 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:51.189 02:50:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:51.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:51.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:28:51.189 00:28:51.189 --- 10.0.0.2 ping statistics --- 00:28:51.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.189 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:51.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:51.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:28:51.189 00:28:51.189 --- 10.0.0.1 ping statistics --- 00:28:51.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:51.189 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:51.189 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:51.190 02:50:21 
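The `ipts`/`iptr` pair visible in this trace tags every firewall rule SPDK adds with an `-m comment --comment 'SPDK_NVMF:…'` marker, so teardown can remove all of them at once via `iptables-save | grep -v SPDK_NVMF | iptables-restore`. The filtering step itself can be shown on a canned ruleset, no root required:

```shell
# iptr's cleanup idea: round-trip the saved ruleset through grep -v so only
# rules carrying the SPDK_NVMF comment tag are dropped. Canned input below;
# the real flow pipes iptables-save into iptables-restore.
saved_rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:..."
-A INPUT -p tcp --dport 22 -j ACCEPT'

kept=$(printf '%s\n' "$saved_rules" | grep -v SPDK_NVMF)
```

Tagging rules with a comment and filtering on it is what lets the test tear down exactly its own ACCEPT rule for port 4420 while leaving the host's pre-existing rules untouched.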
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:51.190 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:51.190 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:51.190 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:51.190 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1094757 00:28:51.190 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1094757 00:28:51.190 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:51.190 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1094757 ']' 00:28:51.190 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:51.190 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:51.190 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:51.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:51.190 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:51.190 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:51.190 [2024-12-16 02:50:21.699368] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:28:51.190 [2024-12-16 02:50:21.699411] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:51.190 [2024-12-16 02:50:21.775534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:51.190 [2024-12-16 02:50:21.797913] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:51.190 [2024-12-16 02:50:21.797949] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:51.190 [2024-12-16 02:50:21.797956] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:51.190 [2024-12-16 02:50:21.797962] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:51.190 [2024-12-16 02:50:21.797968] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:51.190 [2024-12-16 02:50:21.799444] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:51.190 [2024-12-16 02:50:21.799557] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:51.190 [2024-12-16 02:50:21.799660] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:51.190 [2024-12-16 02:50:21.799662] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:51.449 [2024-12-16 02:50:21.931172] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.449 02:50:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.449 02:50:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:51.449 Malloc1 00:28:51.449 [2024-12-16 02:50:22.053481] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:51.449 Malloc2 00:28:51.707 Malloc3 00:28:51.708 Malloc4 00:28:51.708 Malloc5 00:28:51.708 Malloc6 00:28:51.708 Malloc7 00:28:51.708 Malloc8 00:28:51.967 Malloc9 
00:28:51.967 Malloc10 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1094946 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1094946 /var/tmp/bdevperf.sock 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1094946 ']' 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
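The repeated `for i in "${num_subsystems[@]}"` / `cat` trace lines above show `shutdown.sh` assembling `rpcs.txt`: one block of RPC commands appended per subsystem index, producing the ten Malloc bdevs listed. The exact RPC lines are not visible in this log, so the two below are purely illustrative placeholders for the pattern:

```shell
# Batch-RPC file pattern from shutdown.sh: loop over subsystem indices and
# append one block of commands per subsystem. The specific RPC commands here
# are illustrative guesses; the real ones are not shown in this trace.
rpcs=$(mktemp)
num_subsystems=({1..10})
for i in "${num_subsystems[@]}"; do
    {
        echo "bdev_malloc_create -b Malloc$i 64 512"
        echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a"
    } >> "$rpcs"
done
```

Accumulating commands into one file and submitting them in a single batch avoids a separate RPC round-trip per subsystem, which matters when creating ten subsystems back to back.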
00:28:51.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.967 { 00:28:51.967 "params": { 00:28:51.967 "name": "Nvme$subsystem", 00:28:51.967 "trtype": "$TEST_TRANSPORT", 00:28:51.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.967 "adrfam": "ipv4", 00:28:51.967 "trsvcid": "$NVMF_PORT", 00:28:51.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.967 "hdgst": ${hdgst:-false}, 00:28:51.967 "ddgst": ${ddgst:-false} 00:28:51.967 }, 00:28:51.967 "method": "bdev_nvme_attach_controller" 00:28:51.967 } 00:28:51.967 EOF 00:28:51.967 )") 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.967 { 00:28:51.967 "params": { 00:28:51.967 "name": "Nvme$subsystem", 00:28:51.967 "trtype": "$TEST_TRANSPORT", 00:28:51.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.967 
"adrfam": "ipv4", 00:28:51.967 "trsvcid": "$NVMF_PORT", 00:28:51.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.967 "hdgst": ${hdgst:-false}, 00:28:51.967 "ddgst": ${ddgst:-false} 00:28:51.967 }, 00:28:51.967 "method": "bdev_nvme_attach_controller" 00:28:51.967 } 00:28:51.967 EOF 00:28:51.967 )") 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.967 { 00:28:51.967 "params": { 00:28:51.967 "name": "Nvme$subsystem", 00:28:51.967 "trtype": "$TEST_TRANSPORT", 00:28:51.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.967 "adrfam": "ipv4", 00:28:51.967 "trsvcid": "$NVMF_PORT", 00:28:51.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.967 "hdgst": ${hdgst:-false}, 00:28:51.967 "ddgst": ${ddgst:-false} 00:28:51.967 }, 00:28:51.967 "method": "bdev_nvme_attach_controller" 00:28:51.967 } 00:28:51.967 EOF 00:28:51.967 )") 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.967 { 00:28:51.967 "params": { 00:28:51.967 "name": "Nvme$subsystem", 00:28:51.967 "trtype": "$TEST_TRANSPORT", 00:28:51.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.967 "adrfam": "ipv4", 00:28:51.967 "trsvcid": "$NVMF_PORT", 00:28:51.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:28:51.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.967 "hdgst": ${hdgst:-false}, 00:28:51.967 "ddgst": ${ddgst:-false} 00:28:51.967 }, 00:28:51.967 "method": "bdev_nvme_attach_controller" 00:28:51.967 } 00:28:51.967 EOF 00:28:51.967 )") 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.967 { 00:28:51.967 "params": { 00:28:51.967 "name": "Nvme$subsystem", 00:28:51.967 "trtype": "$TEST_TRANSPORT", 00:28:51.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.967 "adrfam": "ipv4", 00:28:51.967 "trsvcid": "$NVMF_PORT", 00:28:51.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.967 "hdgst": ${hdgst:-false}, 00:28:51.967 "ddgst": ${ddgst:-false} 00:28:51.967 }, 00:28:51.967 "method": "bdev_nvme_attach_controller" 00:28:51.967 } 00:28:51.967 EOF 00:28:51.967 )") 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.967 { 00:28:51.967 "params": { 00:28:51.967 "name": "Nvme$subsystem", 00:28:51.967 "trtype": "$TEST_TRANSPORT", 00:28:51.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.967 "adrfam": "ipv4", 00:28:51.967 "trsvcid": "$NVMF_PORT", 00:28:51.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.967 "hdgst": ${hdgst:-false}, 00:28:51.967 "ddgst": 
${ddgst:-false} 00:28:51.967 }, 00:28:51.967 "method": "bdev_nvme_attach_controller" 00:28:51.967 } 00:28:51.967 EOF 00:28:51.967 )") 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.967 { 00:28:51.967 "params": { 00:28:51.967 "name": "Nvme$subsystem", 00:28:51.967 "trtype": "$TEST_TRANSPORT", 00:28:51.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.967 "adrfam": "ipv4", 00:28:51.967 "trsvcid": "$NVMF_PORT", 00:28:51.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.967 "hdgst": ${hdgst:-false}, 00:28:51.967 "ddgst": ${ddgst:-false} 00:28:51.967 }, 00:28:51.967 "method": "bdev_nvme_attach_controller" 00:28:51.967 } 00:28:51.967 EOF 00:28:51.967 )") 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:51.967 [2024-12-16 02:50:22.532353] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:28:51.967 [2024-12-16 02:50:22.532405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1094946 ] 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.967 { 00:28:51.967 "params": { 00:28:51.967 "name": "Nvme$subsystem", 00:28:51.967 "trtype": "$TEST_TRANSPORT", 00:28:51.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.967 "adrfam": "ipv4", 00:28:51.967 "trsvcid": "$NVMF_PORT", 00:28:51.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.967 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.967 "hdgst": ${hdgst:-false}, 00:28:51.967 "ddgst": ${ddgst:-false} 00:28:51.967 }, 00:28:51.967 "method": "bdev_nvme_attach_controller" 00:28:51.967 } 00:28:51.967 EOF 00:28:51.967 )") 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.967 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.967 { 00:28:51.967 "params": { 00:28:51.967 "name": "Nvme$subsystem", 00:28:51.967 "trtype": "$TEST_TRANSPORT", 00:28:51.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.967 "adrfam": "ipv4", 00:28:51.968 "trsvcid": "$NVMF_PORT", 00:28:51.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.968 "hdgst": ${hdgst:-false}, 00:28:51.968 "ddgst": ${ddgst:-false} 00:28:51.968 }, 00:28:51.968 "method": 
"bdev_nvme_attach_controller" 00:28:51.968 } 00:28:51.968 EOF 00:28:51.968 )") 00:28:51.968 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:51.968 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.968 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.968 { 00:28:51.968 "params": { 00:28:51.968 "name": "Nvme$subsystem", 00:28:51.968 "trtype": "$TEST_TRANSPORT", 00:28:51.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.968 "adrfam": "ipv4", 00:28:51.968 "trsvcid": "$NVMF_PORT", 00:28:51.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.968 "hdgst": ${hdgst:-false}, 00:28:51.968 "ddgst": ${ddgst:-false} 00:28:51.968 }, 00:28:51.968 "method": "bdev_nvme_attach_controller" 00:28:51.968 } 00:28:51.968 EOF 00:28:51.968 )") 00:28:51.968 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:51.968 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:28:51.968 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:28:51.968 02:50:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:51.968 "params": { 00:28:51.968 "name": "Nvme1", 00:28:51.968 "trtype": "tcp", 00:28:51.968 "traddr": "10.0.0.2", 00:28:51.968 "adrfam": "ipv4", 00:28:51.968 "trsvcid": "4420", 00:28:51.968 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:51.968 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:51.968 "hdgst": false, 00:28:51.968 "ddgst": false 00:28:51.968 }, 00:28:51.968 "method": "bdev_nvme_attach_controller" 00:28:51.968 },{ 00:28:51.968 "params": { 00:28:51.968 "name": "Nvme2", 00:28:51.968 "trtype": "tcp", 00:28:51.968 "traddr": "10.0.0.2", 00:28:51.968 "adrfam": "ipv4", 00:28:51.968 "trsvcid": "4420", 00:28:51.968 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:51.968 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:51.968 "hdgst": false, 00:28:51.968 "ddgst": false 00:28:51.968 }, 00:28:51.968 "method": "bdev_nvme_attach_controller" 00:28:51.968 },{ 00:28:51.968 "params": { 00:28:51.968 "name": "Nvme3", 00:28:51.968 "trtype": "tcp", 00:28:51.968 "traddr": "10.0.0.2", 00:28:51.968 "adrfam": "ipv4", 00:28:51.968 "trsvcid": "4420", 00:28:51.968 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:51.968 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:51.968 "hdgst": false, 00:28:51.968 "ddgst": false 00:28:51.968 }, 00:28:51.968 "method": "bdev_nvme_attach_controller" 00:28:51.968 },{ 00:28:51.968 "params": { 00:28:51.968 "name": "Nvme4", 00:28:51.968 "trtype": "tcp", 00:28:51.968 "traddr": "10.0.0.2", 00:28:51.968 "adrfam": "ipv4", 00:28:51.968 "trsvcid": "4420", 00:28:51.968 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:51.968 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:51.968 "hdgst": false, 00:28:51.968 "ddgst": false 00:28:51.968 }, 00:28:51.968 "method": "bdev_nvme_attach_controller" 00:28:51.968 },{ 00:28:51.968 "params": { 
00:28:51.968 "name": "Nvme5", 00:28:51.968 "trtype": "tcp", 00:28:51.968 "traddr": "10.0.0.2", 00:28:51.968 "adrfam": "ipv4", 00:28:51.968 "trsvcid": "4420", 00:28:51.968 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:51.968 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:51.968 "hdgst": false, 00:28:51.968 "ddgst": false 00:28:51.968 }, 00:28:51.968 "method": "bdev_nvme_attach_controller" 00:28:51.968 },{ 00:28:51.968 "params": { 00:28:51.968 "name": "Nvme6", 00:28:51.968 "trtype": "tcp", 00:28:51.968 "traddr": "10.0.0.2", 00:28:51.968 "adrfam": "ipv4", 00:28:51.968 "trsvcid": "4420", 00:28:51.968 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:51.968 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:51.968 "hdgst": false, 00:28:51.968 "ddgst": false 00:28:51.968 }, 00:28:51.968 "method": "bdev_nvme_attach_controller" 00:28:51.968 },{ 00:28:51.968 "params": { 00:28:51.968 "name": "Nvme7", 00:28:51.968 "trtype": "tcp", 00:28:51.968 "traddr": "10.0.0.2", 00:28:51.968 "adrfam": "ipv4", 00:28:51.968 "trsvcid": "4420", 00:28:51.968 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:51.968 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:51.968 "hdgst": false, 00:28:51.968 "ddgst": false 00:28:51.968 }, 00:28:51.968 "method": "bdev_nvme_attach_controller" 00:28:51.968 },{ 00:28:51.968 "params": { 00:28:51.968 "name": "Nvme8", 00:28:51.968 "trtype": "tcp", 00:28:51.968 "traddr": "10.0.0.2", 00:28:51.968 "adrfam": "ipv4", 00:28:51.968 "trsvcid": "4420", 00:28:51.968 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:51.968 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:51.968 "hdgst": false, 00:28:51.968 "ddgst": false 00:28:51.968 }, 00:28:51.968 "method": "bdev_nvme_attach_controller" 00:28:51.968 },{ 00:28:51.968 "params": { 00:28:51.968 "name": "Nvme9", 00:28:51.968 "trtype": "tcp", 00:28:51.968 "traddr": "10.0.0.2", 00:28:51.968 "adrfam": "ipv4", 00:28:51.968 "trsvcid": "4420", 00:28:51.968 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:51.968 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:28:51.968 "hdgst": false, 00:28:51.968 "ddgst": false 00:28:51.968 }, 00:28:51.968 "method": "bdev_nvme_attach_controller" 00:28:51.968 },{ 00:28:51.968 "params": { 00:28:51.968 "name": "Nvme10", 00:28:51.968 "trtype": "tcp", 00:28:51.968 "traddr": "10.0.0.2", 00:28:51.968 "adrfam": "ipv4", 00:28:51.968 "trsvcid": "4420", 00:28:51.968 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:51.968 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:51.968 "hdgst": false, 00:28:51.968 "ddgst": false 00:28:51.968 }, 00:28:51.968 "method": "bdev_nvme_attach_controller" 00:28:51.968 }' 00:28:51.968 [2024-12-16 02:50:22.614490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.227 [2024-12-16 02:50:22.637196] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.611 Running I/O for 10 seconds... 00:28:53.869 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:53.869 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:53.869 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:53.869 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.869 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:53.869 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.869 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:53.869 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:53.869 02:50:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:53.869 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:28:53.869 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:28:53.869 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:53.869 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:53.869 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:53.870 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:53.870 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.870 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:53.870 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.870 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:53.870 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:53.870 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:54.128 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:54.128 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:54.128 02:50:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:54.128 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:54.128 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.128 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:54.387 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.387 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195 00:28:54.387 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:28:54.387 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:28:54.387 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:28:54.387 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:28:54.387 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1094946 00:28:54.387 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1094946 ']' 00:28:54.387 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1094946 00:28:54.387 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:54.387 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:54.387 02:50:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1094946 00:28:54.387 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:54.387 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:54.387 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1094946' 00:28:54.387 killing process with pid 1094946 00:28:54.387 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1094946 00:28:54.387 02:50:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1094946 00:28:54.387 Received shutdown signal, test time was about 0.937034 seconds 00:28:54.387 00:28:54.387 Latency(us) 00:28:54.387 [2024-12-16T01:50:25.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.387 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:54.387 Verification LBA range: start 0x0 length 0x400 00:28:54.387 Nvme1n1 : 0.89 288.43 18.03 0.00 0.00 219285.94 25465.42 213709.78 00:28:54.387 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:54.387 Verification LBA range: start 0x0 length 0x400 00:28:54.387 Nvme2n1 : 0.89 287.46 17.97 0.00 0.00 216131.29 18100.42 209715.20 00:28:54.387 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:54.388 Verification LBA range: start 0x0 length 0x400 00:28:54.388 Nvme3n1 : 0.88 321.64 20.10 0.00 0.00 186660.96 9611.95 210713.84 00:28:54.388 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:54.388 Verification LBA range: start 0x0 length 0x400 00:28:54.388 Nvme4n1 : 0.87 293.99 18.37 0.00 0.00 203012.14 
14417.92 210713.84 00:28:54.388 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:54.388 Verification LBA range: start 0x0 length 0x400 00:28:54.388 Nvme5n1 : 0.94 273.39 17.09 0.00 0.00 206380.13 16976.94 218702.99 00:28:54.388 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:54.388 Verification LBA range: start 0x0 length 0x400 00:28:54.388 Nvme6n1 : 0.88 291.48 18.22 0.00 0.00 197461.33 16352.79 208716.56 00:28:54.388 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:54.388 Verification LBA range: start 0x0 length 0x400 00:28:54.388 Nvme7n1 : 0.88 289.53 18.10 0.00 0.00 195223.77 15853.47 211712.49 00:28:54.388 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:54.388 Verification LBA range: start 0x0 length 0x400 00:28:54.388 Nvme8n1 : 0.89 286.60 17.91 0.00 0.00 193567.70 16227.96 214708.42 00:28:54.388 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:54.388 Verification LBA range: start 0x0 length 0x400 00:28:54.388 Nvme9n1 : 0.86 223.10 13.94 0.00 0.00 242268.00 18599.74 226692.14 00:28:54.388 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:54.388 Verification LBA range: start 0x0 length 0x400 00:28:54.388 Nvme10n1 : 0.87 221.63 13.85 0.00 0.00 239014.93 21470.84 235679.94 00:28:54.388 [2024-12-16T01:50:25.047Z] =================================================================================================================== 00:28:54.388 [2024-12-16T01:50:25.047Z] Total : 2777.25 173.58 0.00 0.00 208053.97 9611.95 235679.94 00:28:54.646 02:50:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:28:55.580 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1094757 00:28:55.580 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # 
stoptarget 00:28:55.580 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:55.580 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:55.580 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:55.580 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:55.580 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:55.580 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:28:55.580 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:55.580 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:28:55.580 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:55.580 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:55.580 rmmod nvme_tcp 00:28:55.580 rmmod nvme_fabrics 00:28:55.580 rmmod nvme_keyring 00:28:55.580 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:55.580 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:28:55.580 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:28:55.580 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1094757 ']' 
00:28:55.580 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1094757 00:28:55.580 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1094757 ']' 00:28:55.580 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1094757 00:28:55.580 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:55.580 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:55.580 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1094757 00:28:55.838 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:55.838 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:55.838 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1094757' 00:28:55.838 killing process with pid 1094757 00:28:55.838 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1094757 00:28:55.838 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1094757 00:28:56.097 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:56.097 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:56.097 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:56.097 02:50:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:28:56.097 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:28:56.097 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:56.097 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:28:56.097 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:56.097 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:56.097 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:56.097 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:56.097 02:50:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.632 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:58.632 00:28:58.632 real 0m7.358s 00:28:58.632 user 0m21.730s 00:28:58.632 sys 0m1.386s 00:28:58.632 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:58.632 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:58.632 ************************************ 00:28:58.632 END TEST nvmf_shutdown_tc2 00:28:58.632 ************************************ 00:28:58.632 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:58.632 02:50:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:58.632 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:58.632 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:58.632 ************************************ 00:28:58.632 START TEST nvmf_shutdown_tc3 00:28:58.632 ************************************ 00:28:58.632 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:28:58.632 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:28:58.632 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:58.632 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:58.633 02:50:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 
00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:58.633 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.633 02:50:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:58.633 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:58.633 Found net devices under 0000:af:00.0: cvl_0_0 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:58.633 Found net devices under 0000:af:00.1: cvl_0_1 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:58.633 
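The per-device loop traced above (common.sh@410-429) resolves each PCI address to its kernel network interface by globbing the device's sysfs `net/` directory, then strips the path prefix to keep only the interface name. A minimal Python sketch of that lookup, using a temporary directory in place of the real `/sys/bus/pci/devices` tree (the PCI addresses and `cvl_*` interface names are taken from the log; everything else is illustrative):

```python
import os
import tempfile

def find_net_devs(sysfs_root, pci_addrs):
    """Map each PCI address to the interfaces under <sysfs_root>/<pci>/net/,
    mirroring pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) in the trace."""
    net_devs = []
    for pci in pci_addrs:
        net_dir = os.path.join(sysfs_root, pci, "net")
        if not os.path.isdir(net_dir):
            continue  # device has no bound network interface
        for dev in sorted(os.listdir(net_dir)):
            # Equivalent of the "${pci_net_devs[@]##*/}" prefix strip.
            print(f"Found net devices under {pci}: {dev}")
            net_devs.append(dev)
    return net_devs

# Build a fake sysfs tree with the two E810 ports seen in the log.
root = tempfile.mkdtemp()
for pci, dev in [("0000:af:00.0", "cvl_0_0"), ("0000:af:00.1", "cvl_0_1")]:
    os.makedirs(os.path.join(root, pci, "net", dev))

devs = find_net_devs(root, ["0000:af:00.0", "0000:af:00.1"])
print(devs)  # ['cvl_0_0', 'cvl_0_1']
```

With two interfaces found, the `(( 2 == 0 ))` guard at common.sh@432 passes and the run proceeds with `is_hw=yes`.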
02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:58.633 02:50:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:58.633 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:58.634 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:58.634 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:58.634 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:58.634 02:50:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:58.634 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:58.634 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:58.634 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:58.634 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:58.634 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:58.634 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:28:58.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:58.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:28:58.634 00:28:58.634 --- 10.0.0.2 ping statistics --- 00:28:58.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.634 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:28:58.634 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:58.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:58.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:28:58.634 00:28:58.634 --- 10.0.0.1 ping statistics --- 00:28:58.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.634 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:28:58.634 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:58.634 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:28:58.634 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:58.634 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:58.634 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:58.634 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:58.634 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:58.634 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:58.634 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # 
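The `nvmf_tcp_init` sequence above (common.sh@265-291) carves the target-side port into its own network namespace, addresses both ends, opens the NVMe/TCP listener port in the firewall, and verifies reachability in both directions before the target app is launched. Condensed into a standalone sketch (requires root; interface names, addresses, and the 4420 port are taken from the log, and the iptables comment tagging done by the `ipts` wrapper is omitted):

```shell
# Move the target port into a dedicated namespace; the initiator port
# stays in the default namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Address both ends of the link and bring the interfaces up.
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow inbound NVMe/TCP traffic, then check connectivity both ways.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

Once both pings succeed, `NVMF_APP` is prefixed with the `ip netns exec cvl_0_0_ns_spdk` command so `nvmf_tgt` runs inside the target namespace, as seen at common.sh@293 and @508 below.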
modprobe nvme-tcp 00:28:58.634 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:58.634 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:58.634 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:58.634 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:58.634 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1096052 00:28:58.634 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1096052 00:28:58.634 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:58.634 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1096052 ']' 00:28:58.634 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.634 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:58.634 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:58.634 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:58.634 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:58.893 [2024-12-16 02:50:29.290993] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:28:58.893 [2024-12-16 02:50:29.291043] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:58.893 [2024-12-16 02:50:29.372791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:58.893 [2024-12-16 02:50:29.395143] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:58.893 [2024-12-16 02:50:29.395180] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:58.893 [2024-12-16 02:50:29.395188] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:58.893 [2024-12-16 02:50:29.395194] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:58.893 [2024-12-16 02:50:29.395200] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:58.893 [2024-12-16 02:50:29.396662] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:58.893 [2024-12-16 02:50:29.396770] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:58.893 [2024-12-16 02:50:29.396891] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.893 [2024-12-16 02:50:29.396892] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:58.893 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:58.893 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:58.893 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:58.893 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:58.893 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:58.893 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:58.893 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:58.893 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.893 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:58.893 [2024-12-16 02:50:29.527975] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:58.893 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.893 02:50:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:58.893 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:58.893 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:58.893 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:58.893 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:58.893 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.893 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:58.893 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.893 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:59.152 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:59.152 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:59.152 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:59.152 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:59.152 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:59.152 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:28:59.152 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:59.152 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:59.152 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:59.152 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:59.152 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:59.152 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:59.152 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:59.152 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:59.152 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:59.152 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:59.152 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:59.152 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.152 02:50:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:59.152 Malloc1 00:28:59.152 [2024-12-16 02:50:29.639435] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:59.152 Malloc2 00:28:59.152 Malloc3 00:28:59.152 Malloc4 00:28:59.152 Malloc5 00:28:59.411 Malloc6 00:28:59.411 Malloc7 00:28:59.411 Malloc8 00:28:59.411 Malloc9 
00:28:59.411 Malloc10 00:28:59.411 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.411 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:59.411 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:59.411 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:59.411 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1096319 00:28:59.411 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1096319 /var/tmp/bdevperf.sock 00:28:59.411 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1096319 ']' 00:28:59.411 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:59.411 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:59.411 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:59.411 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:59.411 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:28:59.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:59.411 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:28:59.411 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:59.411 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:28:59.411 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:59.411 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.411 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.411 { 00:28:59.411 "params": { 00:28:59.411 "name": "Nvme$subsystem", 00:28:59.411 "trtype": "$TEST_TRANSPORT", 00:28:59.411 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.411 "adrfam": "ipv4", 00:28:59.411 "trsvcid": "$NVMF_PORT", 00:28:59.411 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.411 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.411 "hdgst": ${hdgst:-false}, 00:28:59.411 "ddgst": ${ddgst:-false} 00:28:59.411 }, 00:28:59.411 "method": "bdev_nvme_attach_controller" 00:28:59.411 } 00:28:59.411 EOF 00:28:59.411 )") 00:28:59.411 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:59.670 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.670 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.670 { 00:28:59.670 "params": { 00:28:59.670 "name": "Nvme$subsystem", 00:28:59.670 "trtype": "$TEST_TRANSPORT", 00:28:59.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.670 
"adrfam": "ipv4", 00:28:59.670 "trsvcid": "$NVMF_PORT", 00:28:59.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.670 "hdgst": ${hdgst:-false}, 00:28:59.670 "ddgst": ${ddgst:-false} 00:28:59.670 }, 00:28:59.670 "method": "bdev_nvme_attach_controller" 00:28:59.670 } 00:28:59.670 EOF 00:28:59.670 )") 00:28:59.670 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:59.670 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.670 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.670 { 00:28:59.670 "params": { 00:28:59.670 "name": "Nvme$subsystem", 00:28:59.670 "trtype": "$TEST_TRANSPORT", 00:28:59.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.670 "adrfam": "ipv4", 00:28:59.670 "trsvcid": "$NVMF_PORT", 00:28:59.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.670 "hdgst": ${hdgst:-false}, 00:28:59.670 "ddgst": ${ddgst:-false} 00:28:59.670 }, 00:28:59.670 "method": "bdev_nvme_attach_controller" 00:28:59.670 } 00:28:59.670 EOF 00:28:59.670 )") 00:28:59.670 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:59.670 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.670 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.670 { 00:28:59.670 "params": { 00:28:59.670 "name": "Nvme$subsystem", 00:28:59.670 "trtype": "$TEST_TRANSPORT", 00:28:59.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.670 "adrfam": "ipv4", 00:28:59.670 "trsvcid": "$NVMF_PORT", 00:28:59.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:28:59.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.670 "hdgst": ${hdgst:-false}, 00:28:59.670 "ddgst": ${ddgst:-false} 00:28:59.670 }, 00:28:59.670 "method": "bdev_nvme_attach_controller" 00:28:59.670 } 00:28:59.670 EOF 00:28:59.670 )") 00:28:59.670 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:59.670 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.670 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.670 { 00:28:59.670 "params": { 00:28:59.670 "name": "Nvme$subsystem", 00:28:59.670 "trtype": "$TEST_TRANSPORT", 00:28:59.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.670 "adrfam": "ipv4", 00:28:59.670 "trsvcid": "$NVMF_PORT", 00:28:59.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.670 "hdgst": ${hdgst:-false}, 00:28:59.670 "ddgst": ${ddgst:-false} 00:28:59.670 }, 00:28:59.670 "method": "bdev_nvme_attach_controller" 00:28:59.670 } 00:28:59.670 EOF 00:28:59.670 )") 00:28:59.670 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:59.670 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.670 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.670 { 00:28:59.670 "params": { 00:28:59.670 "name": "Nvme$subsystem", 00:28:59.670 "trtype": "$TEST_TRANSPORT", 00:28:59.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.670 "adrfam": "ipv4", 00:28:59.670 "trsvcid": "$NVMF_PORT", 00:28:59.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.670 "hdgst": ${hdgst:-false}, 00:28:59.670 "ddgst": 
${ddgst:-false} 00:28:59.670 }, 00:28:59.670 "method": "bdev_nvme_attach_controller" 00:28:59.670 } 00:28:59.670 EOF 00:28:59.670 )") 00:28:59.670 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:59.670 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:59.670 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:59.670 { 00:28:59.670 "params": { 00:28:59.670 "name": "Nvme$subsystem", 00:28:59.670 "trtype": "$TEST_TRANSPORT", 00:28:59.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:59.670 "adrfam": "ipv4", 00:28:59.670 "trsvcid": "$NVMF_PORT", 00:28:59.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:59.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:59.670 "hdgst": ${hdgst:-false}, 00:28:59.670 "ddgst": ${ddgst:-false} 00:28:59.670 }, 00:28:59.670 "method": "bdev_nvme_attach_controller" 00:28:59.670 } 00:28:59.670 EOF 00:28:59.670 )") 00:28:59.670 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:59.670 [2024-12-16 02:50:30.111801] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:28:59.670 [2024-12-16 02:50:30.111858] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1096319 ] 00:28:59.671 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq .
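The xtrace above repeats one loop iteration per subsystem (`nvmf/common.sh@562`/`@582` in the trace). The pattern it records can be sketched as follows; this is a reconstruction from the trace, not the actual `nvmf/common.sh` source, and the default values (`tcp`, `10.0.0.2`, `4420`) are illustrative stand-ins for the test environment variables:

```shell
# Build one JSON fragment per subsystem with a heredoc, collecting them
# in an array; the trace does the same thing with `cat <<-EOF`.
config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Join the fragments with commas, as the trace does with IFS=, and
# printf '%s\n'; the trace then pipes the result through `jq .`.
joined=$(IFS=,; printf '%s' "${config[*]}")
printf '%s\n' "$joined"
```

Because the heredoc delimiter is unquoted, `$subsystem` and the `${var:-default}` expansions are resolved at assembly time, which is why the `printf '%s\n'` output below shows concrete values (`Nvme1`, `cnode1`, `4420`) while the xtrace shows the unexpanded template.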
00:28:59.671 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:28:59.671 02:50:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:59.671 "params": { 00:28:59.671 "name": "Nvme1", 00:28:59.671 "trtype": "tcp", 00:28:59.671 "traddr": "10.0.0.2", 00:28:59.671 "adrfam": "ipv4", 00:28:59.671 "trsvcid": "4420", 00:28:59.671 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:59.671 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:59.671 "hdgst": false, 00:28:59.671 "ddgst": false 00:28:59.671 }, 00:28:59.671 "method": "bdev_nvme_attach_controller" 00:28:59.671 },{ 00:28:59.671 "params": { 00:28:59.671 "name": "Nvme2", 00:28:59.671 "trtype": "tcp", 00:28:59.671 "traddr": "10.0.0.2", 00:28:59.671 "adrfam": "ipv4", 00:28:59.671 "trsvcid": "4420", 00:28:59.671 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:59.671 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:59.671 "hdgst": false, 00:28:59.671 "ddgst": false 00:28:59.671 }, 00:28:59.671 "method": "bdev_nvme_attach_controller" 00:28:59.671 },{ 00:28:59.671 "params": { 00:28:59.671 "name": "Nvme3", 00:28:59.671 "trtype": "tcp", 00:28:59.671 "traddr": "10.0.0.2", 00:28:59.671 "adrfam": "ipv4", 00:28:59.671 "trsvcid": "4420", 00:28:59.671 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:59.671 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:59.671 "hdgst": false, 00:28:59.671 "ddgst": false 00:28:59.671 }, 00:28:59.671 "method": "bdev_nvme_attach_controller" 00:28:59.671 },{ 00:28:59.671 "params": { 00:28:59.671 "name": "Nvme4", 00:28:59.671 "trtype": "tcp", 00:28:59.671 "traddr": "10.0.0.2", 00:28:59.671 "adrfam": "ipv4", 00:28:59.671 "trsvcid": "4420", 00:28:59.671 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:59.671 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:59.671 "hdgst": false, 00:28:59.671 "ddgst": false 00:28:59.671 }, 00:28:59.671 "method": "bdev_nvme_attach_controller" 00:28:59.671 },{ 00:28:59.671 "params": { 
00:28:59.671 "name": "Nvme5", 00:28:59.671 "trtype": "tcp", 00:28:59.671 "traddr": "10.0.0.2", 00:28:59.671 "adrfam": "ipv4", 00:28:59.671 "trsvcid": "4420", 00:28:59.671 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:59.671 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:59.671 "hdgst": false, 00:28:59.671 "ddgst": false 00:28:59.671 }, 00:28:59.671 "method": "bdev_nvme_attach_controller" 00:28:59.671 },{ 00:28:59.671 "params": { 00:28:59.671 "name": "Nvme6", 00:28:59.671 "trtype": "tcp", 00:28:59.671 "traddr": "10.0.0.2", 00:28:59.671 "adrfam": "ipv4", 00:28:59.671 "trsvcid": "4420", 00:28:59.671 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:59.671 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:59.671 "hdgst": false, 00:28:59.671 "ddgst": false 00:28:59.671 }, 00:28:59.671 "method": "bdev_nvme_attach_controller" 00:28:59.671 },{ 00:28:59.671 "params": { 00:28:59.671 "name": "Nvme7", 00:28:59.671 "trtype": "tcp", 00:28:59.671 "traddr": "10.0.0.2", 00:28:59.671 "adrfam": "ipv4", 00:28:59.671 "trsvcid": "4420", 00:28:59.671 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:59.671 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:59.671 "hdgst": false, 00:28:59.671 "ddgst": false 00:28:59.671 }, 00:28:59.671 "method": "bdev_nvme_attach_controller" 00:28:59.671 },{ 00:28:59.671 "params": { 00:28:59.671 "name": "Nvme8", 00:28:59.671 "trtype": "tcp", 00:28:59.671 "traddr": "10.0.0.2", 00:28:59.671 "adrfam": "ipv4", 00:28:59.671 "trsvcid": "4420", 00:28:59.671 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:59.671 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:59.671 "hdgst": false, 00:28:59.671 "ddgst": false 00:28:59.671 }, 00:28:59.671 "method": "bdev_nvme_attach_controller" 00:28:59.671 },{ 00:28:59.671 "params": { 00:28:59.671 "name": "Nvme9", 00:28:59.671 "trtype": "tcp", 00:28:59.671 "traddr": "10.0.0.2", 00:28:59.671 "adrfam": "ipv4", 00:28:59.671 "trsvcid": "4420", 00:28:59.671 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:59.671 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:28:59.671 "hdgst": false, 00:28:59.671 "ddgst": false 00:28:59.671 }, 00:28:59.671 "method": "bdev_nvme_attach_controller" 00:28:59.671 },{ 00:28:59.671 "params": { 00:28:59.671 "name": "Nvme10", 00:28:59.671 "trtype": "tcp", 00:28:59.671 "traddr": "10.0.0.2", 00:28:59.671 "adrfam": "ipv4", 00:28:59.671 "trsvcid": "4420", 00:28:59.671 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:59.671 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:59.671 "hdgst": false, 00:28:59.671 "ddgst": false 00:28:59.671 }, 00:28:59.671 "method": "bdev_nvme_attach_controller" 00:28:59.671 }' 00:28:59.671 [2024-12-16 02:50:30.190522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.671 [2024-12-16 02:50:30.212891] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:01.575 Running I/O for 10 seconds... 00:29:01.575 02:50:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:01.575 02:50:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:01.575 02:50:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:01.575 02:50:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.575 02:50:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:01.575 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.575 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:01.575 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:29:01.575 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:01.575 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:01.575 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:01.575 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:01.575 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:01.575 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:01.575 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:01.575 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:01.575 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.575 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:01.575 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.575 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:29:01.575 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:29:01.575 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:01.833 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 
00:29:01.833 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:01.833 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:01.834 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:01.834 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.834 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:01.834 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.834 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:01.834 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:01.834 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:02.092 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:02.092 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:02.092 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:02.092 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:02.092 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.092 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:29:02.367 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.367 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:02.367 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:02.367 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:29:02.367 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:29:02.367 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:29:02.367 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1096052 00:29:02.367 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1096052 ']' 00:29:02.367 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1096052 00:29:02.367 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:29:02.367 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:02.367 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1096052 00:29:02.367 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:02.367 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:02.367 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1096052' 00:29:02.367 killing process with pid 1096052 00:29:02.367 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1096052 00:29:02.367 02:50:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1096052 00:29:02.367 [2024-12-16 02:50:32.843014] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179bf00 is same with the state(6) to be set 00:29:02.368 [2024-12-16 02:50:32.844679] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a10a70 is same with the state(6) to be set 00:29:02.368 [2024-12-16 02:50:32.844684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.368 [2024-12-16 02:50:32.844713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.368 [2024-12-16 02:50:32.844726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.368 [2024-12-16 02:50:32.844735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.368 [2024-12-16 02:50:32.844743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.368 [2024-12-16 02:50:32.844751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.368 [2024-12-16 02:50:32.844760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.368 [2024-12-16 02:50:32.844767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.368 [2024-12-16 02:50:32.844776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363140 is same with the state(6) to be set 00:29:02.369 [2024-12-16 02:50:32.847144] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.369 [2024-12-16 02:50:32.847221]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.369 [2024-12-16 02:50:32.847227] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.369 [2024-12-16 02:50:32.847233] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.369 [2024-12-16 02:50:32.847239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.369 [2024-12-16 02:50:32.847245] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.369 [2024-12-16 02:50:32.847251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.369 [2024-12-16 02:50:32.847257] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.369 [2024-12-16 02:50:32.847267] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.369 [2024-12-16 02:50:32.847273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.369 [2024-12-16 02:50:32.847279] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.369 [2024-12-16 02:50:32.847286] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.369 [2024-12-16 02:50:32.847292] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.369 [2024-12-16 02:50:32.847298] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.369 [2024-12-16 02:50:32.847303] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.369 [2024-12-16 02:50:32.847309] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.369 [2024-12-16 02:50:32.847315] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.369 [2024-12-16 02:50:32.847312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.369 [2024-12-16 02:50:32.847323] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.369 [2024-12-16 02:50:32.847332] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.369 [2024-12-16 02:50:32.847336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.369 [2024-12-16 02:50:32.847339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.369 [2024-12-16 02:50:32.847346] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.369 [2024-12-16 02:50:32.847352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.369 [2024-12-16 02:50:32.847353] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.369 [2024-12-16 02:50:32.847362] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.369 [2024-12-16 02:50:32.847362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.369 [2024-12-16 02:50:32.847368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.369 [2024-12-16 02:50:32.847373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.369 [2024-12-16 02:50:32.847375] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.369 [2024-12-16 02:50:32.847381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.369 [2024-12-16 02:50:32.847382] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.369 [2024-12-16 02:50:32.847391] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.369 [2024-12-16 02:50:32.847392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.370 [2024-12-16 02:50:32.847397] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.370 [2024-12-16 02:50:32.847400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.370 [2024-12-16 02:50:32.847409] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be
set 00:29:02.370 [2024-12-16 02:50:32.847411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.370 [2024-12-16 02:50:32.847416] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.370 [2024-12-16 02:50:32.847419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.370 [2024-12-16 02:50:32.847424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.370 [2024-12-16 02:50:32.847428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.370 [2024-12-16 02:50:32.847431] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.370 [2024-12-16 02:50:32.847436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.370 [2024-12-16 02:50:32.847439] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.370 [2024-12-16 02:50:32.847445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.370 [2024-12-16 02:50:32.847447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.370 [2024-12-16 02:50:32.847453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.370 [2024-12-16 02:50:32.847454] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.370 [2024-12-16 02:50:32.847462] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.370 [2024-12-16 02:50:32.847462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.370 [2024-12-16 02:50:32.847469] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.370 [2024-12-16 02:50:32.847470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.370 [2024-12-16 02:50:32.847476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.370 [2024-12-16 02:50:32.847479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.370 [2024-12-16 02:50:32.847484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.370 [2024-12-16 02:50:32.847487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.370 [2024-12-16 02:50:32.847490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.370 [2024-12-16 02:50:32.847496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.370 [2024-12-16 02:50:32.847498] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.370 [2024-12-16 02:50:32.847505] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.370 [2024-12-16 02:50:32.847506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.370 [2024-12-16 02:50:32.847519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.370 [2024-12-16 02:50:32.847520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.370 [2024-12-16 02:50:32.847525] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.370 [2024-12-16 02:50:32.847527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.370 [2024-12-16 02:50:32.847531] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.370 [2024-12-16 02:50:32.847536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.370 [2024-12-16 02:50:32.847538] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.370 [2024-12-16 02:50:32.847544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.370 [2024-12-16 02:50:32.847546] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.370 [2024-12-16 02:50:32.847553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.370 [2024-12-16 02:50:32.847553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.370 [2024-12-16 02:50:32.847562] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.370 [2024-12-16 02:50:32.847564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.370 [2024-12-16 02:50:32.847569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.370 [2024-12-16 02:50:32.847573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.370 [2024-12-16 02:50:32.847576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.370 [2024-12-16 02:50:32.847580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.370 [2024-12-16 02:50:32.847582] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.370 [2024-12-16 02:50:32.847590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.370 [2024-12-16 02:50:32.847590] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.370 [2024-12-16 02:50:32.847600] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.370 [2024-12-16 02:50:32.847601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0 00:29:02.370 [2024-12-16 02:50:32.847608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.370 [2024-12-16 02:50:32.847612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.370 [2024-12-16 02:50:32.847615] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179c8c0 is same with the state(6) to be set 00:29:02.370 [2024-12-16 02:50:32.847620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.370 [2024-12-16 02:50:32.847630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.370 [2024-12-16 02:50:32.847637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.370 [2024-12-16 02:50:32.847645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.370 [2024-12-16 02:50:32.847652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.370 [2024-12-16 02:50:32.847661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.370 [2024-12-16 02:50:32.847668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.370 [2024-12-16 02:50:32.847676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.370 [2024-12-16 02:50:32.847682] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.370 [2024-12-16 02:50:32.847691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.370 [2024-12-16 02:50:32.847697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.370 [2024-12-16 02:50:32.847705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.370 [2024-12-16 02:50:32.847712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.370 [2024-12-16 02:50:32.847721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.370 [2024-12-16 02:50:32.847727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.370 [2024-12-16 02:50:32.847735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.370 [2024-12-16 02:50:32.847741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.370 [2024-12-16 02:50:32.847749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.370 [2024-12-16 02:50:32.847755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.370 [2024-12-16 02:50:32.847764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 
nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.370 [2024-12-16 02:50:32.847770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.370 [2024-12-16 02:50:32.847778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.370 [2024-12-16 02:50:32.847784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.370 [2024-12-16 02:50:32.847794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.370 [2024-12-16 02:50:32.847801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.370 [2024-12-16 02:50:32.847811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.370 [2024-12-16 02:50:32.847818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.370 [2024-12-16 02:50:32.847826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.370 [2024-12-16 02:50:32.847832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.370 [2024-12-16 02:50:32.847840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.847851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:02.371 [2024-12-16 02:50:32.847860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.847867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.847876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.847882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.847890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.847897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.847905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.847911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.847919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.847926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.847934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.847941] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.847949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.847955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.847963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.847969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.847978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.847985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.847993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.848004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.848012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.848019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.848027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.848033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.848042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.848049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.848058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.848064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.848072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.848078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.848087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.848093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.848102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.848109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.848117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.848124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.848132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.848139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.848147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.848154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.848161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.848168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.848175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.848182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.848193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 
02:50:32.848199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.848207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.848214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.848221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.848228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.848236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.848242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.848253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.848260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.848268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.848275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.848283] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.848289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.848299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.848305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.848313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.848320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.848327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.848334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.848342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.371 [2024-12-16 02:50:32.848349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.371 [2024-12-16 02:50:32.848782] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.371 [2024-12-16 02:50:32.848808] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be 
set 00:29:02.371 [2024-12-16 02:50:32.848816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.371 [2024-12-16 02:50:32.848826] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.371 [2024-12-16 02:50:32.848832] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.371 [2024-12-16 02:50:32.848838] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.371 [2024-12-16 02:50:32.848844] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.371 [2024-12-16 02:50:32.848856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.371 [2024-12-16 02:50:32.848863] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.371 [2024-12-16 02:50:32.848869] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.371 [2024-12-16 02:50:32.848875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.371 [2024-12-16 02:50:32.848882] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.371 [2024-12-16 02:50:32.848887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.371 [2024-12-16 02:50:32.848894] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.371 [2024-12-16 
02:50:32.848900] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.848906] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.848912] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.848920] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.848926] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.848932] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.848938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.848947] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.848953] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.848959] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.848965] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.848972] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.848978] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.848984] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.848989] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.848995] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849003] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849024] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849030] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849036] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849042] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849048] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849054] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849059] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849065] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849073] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849085] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849090] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849096] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849102] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849107] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849114] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849120] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849126] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849132] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849137] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849151] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849157] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849163] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849169] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849177] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849184] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849196] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.849202] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179cdb0 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.850020] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.850042] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.850050] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.372 [2024-12-16 02:50:32.850057] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850064] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850070] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850082] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850089] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850095] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850101] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850108] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850114] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850127] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850133] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850139] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850151] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850158] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850165] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850171] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850181] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850188] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850194] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850200] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850214] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850221] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850227] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850232] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850246] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850252] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850266] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850272] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850284] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850296] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850301] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850315] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850340] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850347] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850354] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850360] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850366] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850373] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850379] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850392] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850404] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850410] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850418] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850425] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850431] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850437] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d280 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.850961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:02.373 [2024-12-16 02:50:32.851018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1362cd0 (9): Bad file descriptor 00:29:02.373 [2024-12-16 02:50:32.851421] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.851440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.851447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.851453] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.851459] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.851466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.851473] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.851479] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.851486] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.851492] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.851499] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.851508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.851503] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:02.373 [2024-12-16 02:50:32.851516] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.851523] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.851530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.851536] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.851543] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.851549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 
00:29:02.373 [2024-12-16 02:50:32.851555] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.851561] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.851568] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.851575] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.851581] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.373 [2024-12-16 02:50:32.851588] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851594] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851600] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851618] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851625] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 
02:50:32.851631] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851637] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851656] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851678] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851686] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851692] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851704] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851710] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851716] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851734] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851740] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851746] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851763] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851769] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851774] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851780] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851787] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851794] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851799] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851811] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851823] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.851829] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179d600 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.852490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.374 [2024-12-16 02:50:32.852516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1362cd0 with addr=10.0.0.2, port=4420 00:29:02.374 [2024-12-16 02:50:32.852528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362cd0 is same with the state(6) to be set 00:29:02.374 [2024-12-16 02:50:32.852549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.374 [2024-12-16 02:50:32.852558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.374 [2024-12-16 02:50:32.852570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.374 [2024-12-16 02:50:32.852577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.374 [2024-12-16 02:50:32.852586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.374 [2024-12-16 02:50:32.852593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.374 [2024-12-16 02:50:32.852602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.374 [2024-12-16 02:50:32.852609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.374 [2024-12-16 02:50:32.852626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.374 [2024-12-16 02:50:32.852632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.374 [2024-12-16 02:50:32.852641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.374 [2024-12-16 02:50:32.852648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:02.374 [2024-12-16 02:50:32.852656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.374 [2024-12-16 02:50:32.852663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.374 [2024-12-16 02:50:32.852671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.374 [2024-12-16 02:50:32.852678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.374 [2024-12-16 02:50:32.852686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.374 [2024-12-16 02:50:32.852693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.374 [2024-12-16 02:50:32.852701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.374 [2024-12-16 02:50:32.852708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.374 [2024-12-16 02:50:32.852716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.374 [2024-12-16 02:50:32.852723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.374 [2024-12-16 02:50:32.852730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.374 [2024-12-16 02:50:32.852737] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.374 [2024-12-16 02:50:32.852751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.374 [2024-12-16 02:50:32.852758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.374 [2024-12-16 02:50:32.852766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.374 [2024-12-16 02:50:32.852772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.374 [2024-12-16 02:50:32.852780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.374 [2024-12-16 02:50:32.852787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.374 [2024-12-16 02:50:32.852794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.374 [2024-12-16 02:50:32.852801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.374 [2024-12-16 02:50:32.852810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.374 [2024-12-16 02:50:32.852816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.374 [2024-12-16 02:50:32.852824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.374 [2024-12-16 02:50:32.852830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.374 [2024-12-16 02:50:32.852838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.374 [2024-12-16 02:50:32.852844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.374 [2024-12-16 02:50:32.852858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.374 [2024-12-16 02:50:32.852866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.374 [2024-12-16 02:50:32.852874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.374 [2024-12-16 02:50:32.852881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.375 [2024-12-16 02:50:32.852889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.375 [2024-12-16 02:50:32.852895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.375 [2024-12-16 02:50:32.852903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.375 [2024-12-16 02:50:32.852909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:02.375 [2024-12-16 02:50:32.852918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.375 [2024-12-16 02:50:32.852925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.375 [2024-12-16 02:50:32.852933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.375 [2024-12-16 02:50:32.852941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.375 [2024-12-16 02:50:32.852949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.375 [2024-12-16 02:50:32.852955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.375 [2024-12-16 02:50:32.852963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.375 [2024-12-16 02:50:32.852970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.375 [2024-12-16 02:50:32.852978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.375 [2024-12-16 02:50:32.852984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.375 [2024-12-16 02:50:32.852994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.375 [2024-12-16 
02:50:32.853000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.375
[2024-12-16 02:50:32.853008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.375
[2024-12-16 02:50:32.853010] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dad0 is same with the state(6) to be set 00:29:02.375
(last message repeated through 02:50:32.853503, interleaved with the READ/ABORTED entries that follow)
[2024-12-16 02:50:32.853014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.375
(READ/ABORTED pairs continue for cid:33 through cid:59, lba 28800 through 32128 in steps of 128, each completed as ABORTED - SQ DELETION (00/08))
[2024-12-16 02:50:32.853508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.376
[2024-12-16 02:50:32.853515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.376
[2024-12-16 02:50:32.853524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.376
[2024-12-16 02:50:32.853532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.376
[2024-12-16 02:50:32.853539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.376
[2024-12-16 02:50:32.853546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.376
[2024-12-16 02:50:32.853555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.376
[2024-12-16 02:50:32.853561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.376
[2024-12-16 02:50:32.853569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.376 [2024-12-16 02:50:32.853575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.376
[2024-12-16 02:50:32.853585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.376
[2024-12-16 02:50:32.853643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.376
[2024-12-16 02:50:32.853697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.376
[2024-12-16 02:50:32.853751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.376
[2024-12-16 02:50:32.853804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1888780 is same with the state(6) to be set 00:29:02.376
[2024-12-16 02:50:32.853975] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:02.376
[2024-12-16 02:50:32.854336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dfc0 is same with the state(6) to be set 00:29:02.376
(last message repeated 13 times through 02:50:32.854614) 00:29:02.377
[2024-12-16 02:50:32.854643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.377
[2024-12-16 02:50:32.854662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.377
[2024-12-16 02:50:32.854722] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dfc0 is same with the state(6) to be set 00:29:02.377
[2024-12-16 02:50:32.854788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.377 [2024-12-16 02:50:32.854839] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dfc0 is same with the state(6) to be set 00:29:02.377 [2024-12-16 02:50:32.854905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.377 [2024-12-16 02:50:32.854962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dfc0 is same with the state(6) to be set 00:29:02.377 [2024-12-16 02:50:32.855022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.377 [2024-12-16 02:50:32.855074] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dfc0 is same with the state(6) to be set 00:29:02.377 [2024-12-16 02:50:32.855126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.377 [2024-12-16 02:50:32.855180] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dfc0 is same with the state(6) to be set 00:29:02.377 [2024-12-16 02:50:32.855237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.377 [2024-12-16 02:50:32.855289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dfc0 is same with the state(6) to be set 00:29:02.377 [2024-12-16 02:50:32.855343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.377 [2024-12-16 02:50:32.855395] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dfc0 is 
same with the state(6) to be set 00:29:02.377 [2024-12-16 02:50:32.855460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.377 [2024-12-16 02:50:32.855510] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dfc0 is same with the state(6) to be set 00:29:02.377 [2024-12-16 02:50:32.855565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.377 [2024-12-16 02:50:32.855620] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dfc0 is same with the state(6) to be set 00:29:02.377 [2024-12-16 02:50:32.855675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.377 [2024-12-16 02:50:32.855730] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dfc0 is same with the state(6) to be set 00:29:02.377 [2024-12-16 02:50:32.855783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.377 [2024-12-16 02:50:32.855838] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dfc0 is same with the state(6) to be set 00:29:02.377 [2024-12-16 02:50:32.855909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.377 [2024-12-16 02:50:32.855963] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dfc0 is same with the state(6) to be set 00:29:02.377 [2024-12-16 02:50:32.856016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.377 [2024-12-16 02:50:32.856072] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x179dfc0 is same with the state(6) to be set 00:29:02.377 [2024-12-16 02:50:32.856126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.377 [2024-12-16 02:50:32.856179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.377 [2024-12-16 02:50:32.856231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.377 [2024-12-16 02:50:32.856268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.377 [2024-12-16 02:50:32.856305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.377 [2024-12-16 02:50:32.856338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.377 [2024-12-16 02:50:32.856373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.377 [2024-12-16 02:50:32.856407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.377 [2024-12-16 02:50:32.856443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.377 [2024-12-16 02:50:32.856476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.377 [2024-12-16 02:50:32.856511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.377 [2024-12-16 02:50:32.856549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.377 [2024-12-16 02:50:32.856587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.377 [2024-12-16 02:50:32.856624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.377 [2024-12-16 02:50:32.856661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.377 [2024-12-16 02:50:32.856695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.377 [2024-12-16 02:50:32.856730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.377 [2024-12-16 02:50:32.856765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.377 [2024-12-16 02:50:32.856801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.377 [2024-12-16 02:50:32.856839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.377 [2024-12-16 02:50:32.856881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.377 [2024-12-16 02:50:32.856915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.377 [2024-12-16 
02:50:32.856951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.377 [2024-12-16 02:50:32.856985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.377 [2024-12-16 02:50:32.857021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.377 [2024-12-16 02:50:32.857055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.377 [2024-12-16 02:50:32.857094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.377 [2024-12-16 02:50:32.857128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.377 [2024-12-16 02:50:32.857163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.377 [2024-12-16 02:50:32.857197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.377 [2024-12-16 02:50:32.857233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.377 [2024-12-16 02:50:32.857268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.377 [2024-12-16 02:50:32.857303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.377 [2024-12-16 02:50:32.857341] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.377 [2024-12-16 02:50:32.857377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.377 [2024-12-16 02:50:32.857411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.377 [2024-12-16 02:50:32.857447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.377 [2024-12-16 02:50:32.857479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.377 [2024-12-16 02:50:32.857515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.377 [2024-12-16 02:50:32.857550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.377 [2024-12-16 02:50:32.857590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.377 [2024-12-16 02:50:32.857624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.377 [2024-12-16 02:50:32.857660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.377 [2024-12-16 02:50:32.857694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.377 [2024-12-16 02:50:32.857731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 
nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.377 [2024-12-16 02:50:32.857765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.857801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.857838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.857879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.857913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.857949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.857984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.858020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.858054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.858089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.858128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:02.378 [2024-12-16 02:50:32.858165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.858200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.858235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.858270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.858309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.858343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.858378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.858416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.858452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.858490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.858526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.858561] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.858596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.858631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.858669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.858703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.858740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.858774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.858809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.858842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.858883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.858920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.858956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.858990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.859025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.859059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.859095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.859129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.859169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.859210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.859246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.859284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.859320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.859351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.859387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.859419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.859456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.859487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.859521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.859554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.859592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.859624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.859659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.859690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.859727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 
02:50:32.859758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.859794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.859827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.859866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.859899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.859934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.859966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.860005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.860036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.860073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.860104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.860139] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.378 [2024-12-16 02:50:32.860170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.860205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2b85a80 is same with the state(6) to be set 00:29:02.378 [2024-12-16 02:50:32.860303] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1362cd0 (9): Bad file descriptor 00:29:02.378 [2024-12-16 02:50:32.860340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.378 [2024-12-16 02:50:32.860350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.860357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.378 [2024-12-16 02:50:32.860364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.860401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.378 [2024-12-16 02:50:32.860434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 [2024-12-16 02:50:32.860472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.378 [2024-12-16 02:50:32.860504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.378 
[2024-12-16 02:50:32.860537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5430 is same with the state(6) to be set 00:29:02.378 [2024-12-16 02:50:32.860592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.378 [2024-12-16 02:50:32.860609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.379 [2024-12-16 02:50:32.860643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.379 [2024-12-16 02:50:32.860675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.379 [2024-12-16 02:50:32.867954] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dfc0 is same with the state(6) to be set 00:29:02.379 [2024-12-16 02:50:32.867969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dfc0 is same with the state(6) to be set 00:29:02.379 [2024-12-16 02:50:32.867979] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dfc0 is same with the state(6) to be set 00:29:02.379 [2024-12-16 02:50:32.867988] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dfc0 is same with the state(6) to be set 00:29:02.379 [2024-12-16 02:50:32.867996] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dfc0 is same with the state(6) to be set 00:29:02.379 [2024-12-16 02:50:32.868005] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dfc0 is same with the state(6) to be set 00:29:02.379 [2024-12-16 02:50:32.868013] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dfc0 is same with the state(6) to be set 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dfc0 is same with the state(6) to be set 00:29:02.379 [2024-12-16 02:50:32.868229] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dfc0 is same with the state(6) to be set 00:29:02.379 [2024-12-16 02:50:32.868238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dfc0 is same with the state(6) to be set 00:29:02.379 [2024-12-16 02:50:32.868246] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dfc0 is same with the state(6) to be set 00:29:02.379 [2024-12-16 02:50:32.868255] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x179dfc0 is same with the state(6) to be set 00:29:02.379 [2024-12-16 02:50:32.872356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.379 [2024-12-16 02:50:32.872370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.379 [2024-12-16 02:50:32.872381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.379 [2024-12-16 02:50:32.872392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.379 [2024-12-16 02:50:32.872401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17bfc50 is same with the state(6) to be set 00:29:02.379 [2024-12-16 02:50:32.872428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.379 [2024-12-16 02:50:32.872439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:02.379 [2024-12-16 02:50:32.872449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.379 [2024-12-16 02:50:32.872459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.379 [2024-12-16 02:50:32.872468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.379 [2024-12-16 02:50:32.872477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.379 [2024-12-16 02:50:32.872487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.379 [2024-12-16 02:50:32.872499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.379 [2024-12-16 02:50:32.872507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17db160 is same with the state(6) to be set 00:29:02.379 [2024-12-16 02:50:32.872539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.379 [2024-12-16 02:50:32.872550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.379 [2024-12-16 02:50:32.872560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.379 [2024-12-16 02:50:32.872569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.379 [2024-12-16 02:50:32.872578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.379 [2024-12-16 02:50:32.872588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.379 [2024-12-16 02:50:32.872597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.379 [2024-12-16 02:50:32.872606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.379 [2024-12-16 02:50:32.872615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126e610 is same with the state(6) to be set 00:29:02.379 [2024-12-16 02:50:32.872646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.379 [2024-12-16 02:50:32.872658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.379 [2024-12-16 02:50:32.872667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.379 [2024-12-16 02:50:32.872676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.379 [2024-12-16 02:50:32.872686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.379 [2024-12-16 02:50:32.872695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.379 [2024-12-16 02:50:32.872705] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:29:02.379 [2024-12-16 02:50:32.872714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.379 [2024-12-16 02:50:32.872722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcde0 is same with the state(6) to be set 00:29:02.379 [2024-12-16 02:50:32.872752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.379 [2024-12-16 02:50:32.872764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.379 [2024-12-16 02:50:32.872774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.379 [2024-12-16 02:50:32.872784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.379 [2024-12-16 02:50:32.872793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.379 [2024-12-16 02:50:32.872805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.379 [2024-12-16 02:50:32.872815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.379 [2024-12-16 02:50:32.872824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.379 [2024-12-16 02:50:32.872833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1359260 is same with the state(6) to be set 00:29:02.379 [2024-12-16 02:50:32.872867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.380 [2024-12-16 02:50:32.872879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.872889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.380 [2024-12-16 02:50:32.872899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.872908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.380 [2024-12-16 02:50:32.872917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.872928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.380 [2024-12-16 02:50:32.872936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.872945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361440 is same with the state(6) to be set 00:29:02.380 [2024-12-16 02:50:32.872974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.380 [2024-12-16 02:50:32.872985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.872996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.380 
[2024-12-16 02:50:32.873004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.873015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.380 [2024-12-16 02:50:32.873024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.873034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:02.380 [2024-12-16 02:50:32.873043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.873052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178f130 is same with the state(6) to be set 00:29:02.380 [2024-12-16 02:50:32.873071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1363140 (9): Bad file descriptor 00:29:02.380 [2024-12-16 02:50:32.874376] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:02.380 [2024-12-16 02:50:32.876252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:02.380 [2024-12-16 02:50:32.876301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:02.380 [2024-12-16 02:50:32.876328] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:02.380 [2024-12-16 02:50:32.876340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:29:02.380 [2024-12-16 02:50:32.876350] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:02.380 [2024-12-16 02:50:32.876395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5430 (9): Bad file descriptor 00:29:02.380 [2024-12-16 02:50:32.876419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17bfc50 (9): Bad file descriptor 00:29:02.380 [2024-12-16 02:50:32.876437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17db160 (9): Bad file descriptor 00:29:02.380 [2024-12-16 02:50:32.876454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x126e610 (9): Bad file descriptor 00:29:02.380 [2024-12-16 02:50:32.876476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dcde0 (9): Bad file descriptor 00:29:02.380 [2024-12-16 02:50:32.876499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1359260 (9): Bad file descriptor 00:29:02.380 [2024-12-16 02:50:32.876518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1361440 (9): Bad file descriptor 00:29:02.380 [2024-12-16 02:50:32.876537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x178f130 (9): Bad file descriptor 00:29:02.380 [2024-12-16 02:50:32.876650] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:02.380 [2024-12-16 02:50:32.876899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.380 [2024-12-16 02:50:32.876919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.876936] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.380 [2024-12-16 02:50:32.876946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.876959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.380 [2024-12-16 02:50:32.876969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.876981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.380 [2024-12-16 02:50:32.876991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.877004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.380 [2024-12-16 02:50:32.877013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.877026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.380 [2024-12-16 02:50:32.877035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.877047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.380 [2024-12-16 02:50:32.877056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.877069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.380 [2024-12-16 02:50:32.877083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.877095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.380 [2024-12-16 02:50:32.877105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.877117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.380 [2024-12-16 02:50:32.877126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.877139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.380 [2024-12-16 02:50:32.877150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.877161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.380 [2024-12-16 02:50:32.877171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.877182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:02.380 [2024-12-16 02:50:32.877193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.877205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.380 [2024-12-16 02:50:32.877215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.877227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.380 [2024-12-16 02:50:32.877237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.877249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.380 [2024-12-16 02:50:32.877259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.877271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.380 [2024-12-16 02:50:32.877281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.877292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.380 [2024-12-16 02:50:32.877302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.877313] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.380 [2024-12-16 02:50:32.877324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.877338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.380 [2024-12-16 02:50:32.877348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.877361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.380 [2024-12-16 02:50:32.877371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.877383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.380 [2024-12-16 02:50:32.877393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.877405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.380 [2024-12-16 02:50:32.877414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.877443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.380 [2024-12-16 02:50:32.877456] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.380 [2024-12-16 02:50:32.877473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.380 [2024-12-16 02:50:32.877485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.877501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.877514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.877530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.877544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.877559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.877572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.877587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.877601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.877616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.877629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.877645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.877658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.877674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.877687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.877702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.877717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.877733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.877746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.877762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.877775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 
02:50:32.877790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.877803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.877819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.877833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.877859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.877873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.877889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.877901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.877917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.877930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.877947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.877960] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.877976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.877989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.878005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.878028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.878044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.878057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.878073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.878086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.878105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.878118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.878135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 
nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.878149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.878165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.878178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.878196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.878209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.878225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.878239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.878254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.878267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.878283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.878297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:02.381 [2024-12-16 02:50:32.878315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.878327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.878345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.878359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.878376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.878388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.878404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.878416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.878432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.878446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.878462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.878477] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.878493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.878506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.381 [2024-12-16 02:50:32.878522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.381 [2024-12-16 02:50:32.878535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.382 [2024-12-16 02:50:32.878551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.382 [2024-12-16 02:50:32.878564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.382 [2024-12-16 02:50:32.878581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.382 [2024-12-16 02:50:32.878594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.382 [2024-12-16 02:50:32.878610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.382 [2024-12-16 02:50:32.878623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.382 [2024-12-16 02:50:32.878640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.382 [2024-12-16 02:50:32.878653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:02.382 [2024-12-16 02:50:32.878757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:29:02.382 [2024-12-16 02:50:32.879047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.382 [2024-12-16 02:50:32.879071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1363140 with addr=10.0.0.2, port=4420
00:29:02.382 [2024-12-16 02:50:32.879085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363140 is same with the state(6) to be set
00:29:02.382 [2024-12-16 02:50:32.881732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:29:02.382 [2024-12-16 02:50:32.882034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.382 [2024-12-16 02:50:32.882062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17bfc50 with addr=10.0.0.2, port=4420
00:29:02.382 [2024-12-16 02:50:32.882075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17bfc50 is same with the state(6) to be set
00:29:02.382 [2024-12-16 02:50:32.882094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1363140 (9): Bad file descriptor
00:29:02.382 [2024-12-16 02:50:32.882207] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:02.382 [2024-12-16 02:50:32.882268] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:02.382 [2024-12-16 02:50:32.882378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:29:02.382 [2024-12-16 02:50:32.882662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.382 [2024-12-16 02:50:32.882686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1362cd0 with addr=10.0.0.2, port=4420
00:29:02.382 [2024-12-16 02:50:32.882700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362cd0 is same with the state(6) to be set
00:29:02.382 [2024-12-16 02:50:32.882722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17bfc50 (9): Bad file descriptor
00:29:02.382 [2024-12-16 02:50:32.882737] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:29:02.382 [2024-12-16 02:50:32.882749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:29:02.382 [2024-12-16 02:50:32.882764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:29:02.382 [2024-12-16 02:50:32.882777] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:29:02.382 [2024-12-16 02:50:32.882910] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:02.382 [2024-12-16 02:50:32.883475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.382 [2024-12-16 02:50:32.883499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17db160 with addr=10.0.0.2, port=4420
00:29:02.382 [2024-12-16 02:50:32.883513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17db160 is same with the state(6) to be set
00:29:02.382 [2024-12-16 02:50:32.883531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1362cd0 (9): Bad file descriptor
00:29:02.382 [2024-12-16 02:50:32.883546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:29:02.382 [2024-12-16 02:50:32.883559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:29:02.382 [2024-12-16 02:50:32.883571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:29:02.382 [2024-12-16 02:50:32.883583] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:29:02.382 [2024-12-16 02:50:32.883664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17db160 (9): Bad file descriptor
00:29:02.382 [2024-12-16 02:50:32.883682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:29:02.382 [2024-12-16 02:50:32.883695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:29:02.382 [2024-12-16 02:50:32.883707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:29:02.382 [2024-12-16 02:50:32.883719] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:29:02.382 [2024-12-16 02:50:32.883783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:29:02.382 [2024-12-16 02:50:32.883798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:29:02.382 [2024-12-16 02:50:32.883810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:29:02.382 [2024-12-16 02:50:32.883821] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:29:02.382 [2024-12-16 02:50:32.886485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.382 [2024-12-16 02:50:32.886511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:02.382 [2024-12-16 02:50:32.886531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.382 [2024-12-16 02:50:32.886544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:02.382 [2024-12-16 02:50:32.886560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.382 [2024-12-16 02:50:32.886579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:02.382 [2024-12-16 02:50:32.886596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA
BLOCK TRANSPORT 0x0 00:29:02.382 [2024-12-16 02:50:32.886609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.382 [2024-12-16 02:50:32.886624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.382 [2024-12-16 02:50:32.886637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.382 [2024-12-16 02:50:32.886652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.382 [2024-12-16 02:50:32.886664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.382 [2024-12-16 02:50:32.886679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.382 [2024-12-16 02:50:32.886692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.382 [2024-12-16 02:50:32.886706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.382 [2024-12-16 02:50:32.886718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.382 [2024-12-16 02:50:32.886734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.382 [2024-12-16 02:50:32.886746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.382 [2024-12-16 02:50:32.886762] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.382 [2024-12-16 02:50:32.886774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.382 [2024-12-16 02:50:32.886788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.382 [2024-12-16 02:50:32.886801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.382 [2024-12-16 02:50:32.886816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.382 [2024-12-16 02:50:32.886829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.382 [2024-12-16 02:50:32.886844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.382 [2024-12-16 02:50:32.886867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.382 [2024-12-16 02:50:32.886883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.382 [2024-12-16 02:50:32.886896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.382 [2024-12-16 02:50:32.886910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.382 [2024-12-16 02:50:32.886924] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.382 [2024-12-16 02:50:32.886943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.382 [2024-12-16 02:50:32.886956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.382 [2024-12-16 02:50:32.886972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.382 [2024-12-16 02:50:32.886985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.382 [2024-12-16 02:50:32.887000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.382 [2024-12-16 02:50:32.887013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.382 [2024-12-16 02:50:32.887028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.382 [2024-12-16 02:50:32.887042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.382 [2024-12-16 02:50:32.887057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.382 [2024-12-16 02:50:32.887070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.382 [2024-12-16 02:50:32.887086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 
02:50:32.887260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887426] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 
nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:02.383 [2024-12-16 02:50:32.887647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887751] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.383 [2024-12-16 02:50:32.887967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:02.383 [2024-12-16 02:50:32.887978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.887986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.887996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.888005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.888016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.888026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.888037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.888045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.888055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1568200 is same with the state(6) to be set 00:29:02.384 [2024-12-16 02:50:32.889277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889314] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889650] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889758] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.889979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.889988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 
02:50:32.889999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.384 [2024-12-16 02:50:32.890007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.384 [2024-12-16 02:50:32.890018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.890026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.890037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.890046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.890057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.890068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.890078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.890087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.890097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.890106] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.890117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.890126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.890136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.890145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.890156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.890165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.890175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.890184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.890194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.890203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.890213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 
nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.890223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.890234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.890242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.890253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.890261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.890272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.890280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.890291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.890300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.890312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.890321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:02.385 [2024-12-16 02:50:32.890331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.890339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.890350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.890358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.890369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.890378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.890387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.890396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.890407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.890416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.890426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.890435] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.890445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.890454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.890464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.890473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.890484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.890493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.890503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.890512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.890522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.890530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.890541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.890553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.890563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1767a50 is same with the state(6) to be set 00:29:02.385 [2024-12-16 02:50:32.891777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.891797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.891811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.891820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.891831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.891840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.891856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.891865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.891876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.891885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.891896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.891904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.891915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.891924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.385 [2024-12-16 02:50:32.891935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.385 [2024-12-16 02:50:32.891945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.386 [2024-12-16 02:50:32.891956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.386 [2024-12-16 02:50:32.891965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.386 [2024-12-16 02:50:32.891976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.386 [2024-12-16 02:50:32.891985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.386 [2024-12-16 02:50:32.891997] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.386 [2024-12-16 02:50:32.892005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.386 [2024-12-16 02:50:32.892016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.386 [2024-12-16 02:50:32.892029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.386 [2024-12-16 02:50:32.892040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.386 [2024-12-16 02:50:32.892049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.386 [2024-12-16 02:50:32.892060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.386 [2024-12-16 02:50:32.892069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.386 [2024-12-16 02:50:32.892080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.386 [2024-12-16 02:50:32.892088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.386 [2024-12-16 02:50:32.892100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.386 [2024-12-16 02:50:32.892109] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.386 [2024-12-16 02:50:32.892119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.386 [2024-12-16 02:50:32.892128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.386 [2024-12-16 02:50:32.892139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.386 [2024-12-16 02:50:32.892149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.386 [2024-12-16 02:50:32.892161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.386 [2024-12-16 02:50:32.892170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.386 [2024-12-16 02:50:32.892181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.386 [2024-12-16 02:50:32.892190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.386 [2024-12-16 02:50:32.892200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.386 [2024-12-16 02:50:32.892209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.386 [2024-12-16 02:50:32.892220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.386 [2024-12-16 02:50:32.892228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.386 [2024-12-16 02:50:32.892239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.386 [2024-12-16 02:50:32.892248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.386 [2024-12-16 02:50:32.892258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.386 [2024-12-16 02:50:32.892267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.386 [2024-12-16 02:50:32.892279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.386 [2024-12-16 02:50:32.892288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.386 [2024-12-16 02:50:32.892299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.386 [2024-12-16 02:50:32.892307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.386 [2024-12-16 02:50:32.892318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.386 [2024-12-16 02:50:32.892328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.386 [2024-12-16 
02:50:32.892339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.386 [2024-12-16 02:50:32.892348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION (00/08) record pair repeats for cid:28 through cid:63 (lba:28160 through lba:32640, len:128 each), timestamps 02:50:32.892358 through 02:50:32.893053 ...]
00:29:02.387 [2024-12-16 02:50:32.893062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1768d60 is same with the state(6) to be set
00:29:02.387 [2024-12-16 02:50:32.894289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.387 [2024-12-16 02:50:32.894308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION (00/08) record pair repeats for cid:1 through cid:63 (lba:24704 through lba:32640, len:128 each), timestamps 02:50:32.894321 through 02:50:32.895566 ...]
00:29:02.388 [2024-12-16 02:50:32.895576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x176a090 is same with the state(6) to be set
00:29:02.388 [2024-12-16 02:50:32.896790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:02.389 [2024-12-16 02:50:32.896810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION (00/08) record pair repeats for cid:1 through cid:16 (lba:24704 through lba:26624, len:128 each), timestamps 02:50:32.896825 through 02:50:32.897140 ...]
00:29:02.389 [2024-12-16 02:50:32.897150] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.389 [2024-12-16 02:50:32.897161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.389 [2024-12-16 02:50:32.897170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.389 [2024-12-16 02:50:32.897180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.389 [2024-12-16 02:50:32.897190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.389 [2024-12-16 02:50:32.897201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.389 [2024-12-16 02:50:32.897210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.389 [2024-12-16 02:50:32.897221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.389 [2024-12-16 02:50:32.897232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.389 [2024-12-16 02:50:32.897243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.389 [2024-12-16 02:50:32.897252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.389 [2024-12-16 02:50:32.897263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.389 [2024-12-16 02:50:32.897272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.389 [2024-12-16 02:50:32.897283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.389 [2024-12-16 02:50:32.897292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.389 [2024-12-16 02:50:32.897303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.389 [2024-12-16 02:50:32.897313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.389 [2024-12-16 02:50:32.897324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.389 [2024-12-16 02:50:32.897333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.389 [2024-12-16 02:50:32.897356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.389 [2024-12-16 02:50:32.897364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.389 [2024-12-16 02:50:32.897374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.389 [2024-12-16 02:50:32.897380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.389 [2024-12-16 
02:50:32.897390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.389 [2024-12-16 02:50:32.897398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.389 [2024-12-16 02:50:32.897407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.389 [2024-12-16 02:50:32.897415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.389 [2024-12-16 02:50:32.897424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.389 [2024-12-16 02:50:32.897431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.389 [2024-12-16 02:50:32.897440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.389 [2024-12-16 02:50:32.897448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.389 [2024-12-16 02:50:32.897457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.389 [2024-12-16 02:50:32.897465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.389 [2024-12-16 02:50:32.897476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.389 [2024-12-16 02:50:32.897483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.389 [2024-12-16 02:50:32.897492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.389 [2024-12-16 02:50:32.897500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.389 [2024-12-16 02:50:32.897510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.389 [2024-12-16 02:50:32.897517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.389 [2024-12-16 02:50:32.897526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.389 [2024-12-16 02:50:32.897534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.389 [2024-12-16 02:50:32.897542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.389 [2024-12-16 02:50:32.897551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.389 [2024-12-16 02:50:32.897560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.389 [2024-12-16 02:50:32.897568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.389 [2024-12-16 02:50:32.897577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 
nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.389 [2024-12-16 02:50:32.897584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.389 [2024-12-16 02:50:32.897593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.389 [2024-12-16 02:50:32.897601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.897609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.897617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.897625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.897632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.897641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.897647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.897656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.897662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:02.390 [2024-12-16 02:50:32.897671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.897680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.897688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.897695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.897704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.897712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.897721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.897728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.897736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.897743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.897751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.897758] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.897767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.897774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.897782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.897790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.897800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.897807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.897817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.897824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.897832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.897840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.897856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.897864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.897873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.897880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.897890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.897897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.897906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.897913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.897922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.897928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.897937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.897944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.897953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.897960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.897968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.897975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.897983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2464730 is same with the state(6) to be set 00:29:02.390 [2024-12-16 02:50:32.898973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.898987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.898999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.899007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.899016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.899024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.899033] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.899040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.899050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.899057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.899066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.899074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.899086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.899094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.899102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.899110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.899118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.899126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.899134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.899142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.899150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.899158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.899166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.390 [2024-12-16 02:50:32.899173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.390 [2024-12-16 02:50:32.899182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899315] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899404] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 
02:50:32.899601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899686] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 
nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.391 [2024-12-16 02:50:32.899841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.391 [2024-12-16 02:50:32.899852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.392 [2024-12-16 02:50:32.899862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.392 [2024-12-16 02:50:32.899869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:02.392 [2024-12-16 02:50:32.899877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.392 [2024-12-16 02:50:32.899885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.392 [2024-12-16 02:50:32.899893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.392 [2024-12-16 02:50:32.899901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.392 [2024-12-16 02:50:32.899912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.392 [2024-12-16 02:50:32.899919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.392 [2024-12-16 02:50:32.899928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.392 [2024-12-16 02:50:32.899935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.392 [2024-12-16 02:50:32.899944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.392 [2024-12-16 02:50:32.899951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.392 [2024-12-16 02:50:32.899960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.392 [2024-12-16 02:50:32.899966] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.392 [2024-12-16 02:50:32.899975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.392 [2024-12-16 02:50:32.899982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.392 [2024-12-16 02:50:32.899992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.392 [2024-12-16 02:50:32.899999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.392 [2024-12-16 02:50:32.900008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.392 [2024-12-16 02:50:32.900015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.392 [2024-12-16 02:50:32.900025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:02.392 [2024-12-16 02:50:32.900032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:02.392 [2024-12-16 02:50:32.900040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26b1fa0 is same with the state(6) to be set 00:29:02.392 [2024-12-16 02:50:32.901006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:02.392 [2024-12-16 02:50:32.901026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting 
controller
00:29:02.392 [2024-12-16 02:50:32.901037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:29:02.392 [2024-12-16 02:50:32.901102] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:29:02.392 [2024-12-16 02:50:32.901116] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress.
00:29:02.392 [2024-12-16 02:50:32.901127] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:29:02.392 [2024-12-16 02:50:32.901140] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:29:02.392 [2024-12-16 02:50:32.901200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:29:02.392 [2024-12-16 02:50:32.901213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:29:02.392 task offset: 31872 on job bdev=Nvme2n1 fails
00:29:02.392
00:29:02.392 Latency(us)
00:29:02.392 [2024-12-16T01:50:33.051Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:02.392 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:02.392 Job: Nvme1n1 ended in about 0.90 seconds with error
00:29:02.392 Verification LBA range: start 0x0 length 0x400
00:29:02.392 Nvme1n1 : 0.90 216.85 13.55 71.17 0.00 219877.32 16727.28 213709.78
00:29:02.392 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:02.392 Job: Nvme2n1 ended in about 0.88 seconds with error
00:29:02.392 Verification LBA range: start 0x0 length 0x400
00:29:02.392 Nvme2n1 : 0.88 219.28 13.70 73.09 0.00 212638.17 3666.90 214708.42
00:29:02.392 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:02.392 Job: Nvme3n1 ended in about 0.91 seconds with error
00:29:02.392 Verification LBA range: start 0x0 length 0x400
00:29:02.392 Nvme3n1 : 0.91 210.03 13.13 70.01 0.00 218425.17 14854.83 221698.93
00:29:02.392 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:02.392 Job: Nvme4n1 ended in about 0.92 seconds with error
00:29:02.392 Verification LBA range: start 0x0 length 0x400
00:29:02.392 Nvme4n1 : 0.92 209.45 13.09 69.82 0.00 215170.19 14355.50 211712.49
00:29:02.392 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:02.392 Job: Nvme5n1 ended in about 0.92 seconds with error
00:29:02.392 Verification LBA range: start 0x0 length 0x400
00:29:02.392 Nvme5n1 : 0.92 208.89 13.06 69.63 0.00 211907.78 16103.13 204721.98
00:29:02.392 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:02.392 Job: Nvme6n1 ended in about 0.92 seconds with error
00:29:02.392 Verification LBA range: start 0x0 length 0x400
00:29:02.392 Nvme6n1 : 0.92 208.32 13.02 69.44 0.00 208740.21 28835.84 196732.83
00:29:02.392 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:02.392 Job: Nvme7n1 ended in about 0.92 seconds with error
00:29:02.392 Verification LBA range: start 0x0 length 0x400
00:29:02.392 Nvme7n1 : 0.92 207.79 12.99 69.26 0.00 205503.15 16103.13 214708.42
00:29:02.392 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:02.392 Job: Nvme8n1 ended in about 0.93 seconds with error
00:29:02.392 Verification LBA range: start 0x0 length 0x400
00:29:02.392 Nvme8n1 : 0.93 207.33 12.96 69.11 0.00 202092.13 14168.26 213709.78
00:29:02.392 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:02.392 Job: Nvme9n1 ended in about 0.91 seconds with error
00:29:02.392 Verification LBA range: start 0x0 length 0x400
00:29:02.392 Nvme9n1 : 0.91 211.93 13.25 70.64 0.00 193163.03 5055.63 217704.35
00:29:02.392 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:02.392 Job: Nvme10n1 ended in about 0.90 seconds with error
00:29:02.392 Verification LBA range: start 0x0 length 0x400
00:29:02.392 Nvme10n1 : 0.90 154.24 9.64 71.02 0.00 237445.71 16352.79 236678.58
00:29:02.392 [2024-12-16T01:50:33.051Z] ===================================================================================================================
00:29:02.392 [2024-12-16T01:50:33.051Z] Total : 2054.09 128.38 703.19 0.00 211978.29 3666.90 236678.58
00:29:02.392 [2024-12-16 02:50:32.931061] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:02.392 [2024-12-16 02:50:32.931108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:29:02.392 [2024-12-16 02:50:32.931127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:29:02.392 [2024-12-16 02:50:32.931430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.392 [2024-12-16 02:50:32.931449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1359260 with addr=10.0.0.2, port=4420
00:29:02.392 [2024-12-16 02:50:32.931461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1359260 is same with the state(6) to be set
00:29:02.392 [2024-12-16 02:50:32.931606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.392 [2024-12-16 02:50:32.931617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x178f130 with addr=10.0.0.2, port=4420
00:29:02.392 [2024-12-16 02:50:32.931625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178f130 is same with the state(6) to be set
00:29:02.392 [2024-12-16 02:50:32.931850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:02.392 [2024-12-16 02:50:32.931864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1361440 with addr=10.0.0.2, port=4420 00:29:02.392 [2024-12-16 02:50:32.931871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1361440 is same with the state(6) to be set 00:29:02.392 [2024-12-16 02:50:32.933187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:02.392 [2024-12-16 02:50:32.933206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:02.392 [2024-12-16 02:50:32.933493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-12-16 02:50:32.933509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17dcde0 with addr=10.0.0.2, port=4420 00:29:02.392 [2024-12-16 02:50:32.933517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dcde0 is same with the state(6) to be set 00:29:02.392 [2024-12-16 02:50:32.933711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-12-16 02:50:32.933723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c5430 with addr=10.0.0.2, port=4420 00:29:02.392 [2024-12-16 02:50:32.933731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c5430 is same with the state(6) to be set 00:29:02.392 [2024-12-16 02:50:32.933947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-12-16 02:50:32.933960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x126e610 with addr=10.0.0.2, port=4420 00:29:02.392 [2024-12-16 02:50:32.933968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x126e610 is same with the state(6) to be set 00:29:02.392 
[2024-12-16 02:50:32.934133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.392 [2024-12-16 02:50:32.934146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1363140 with addr=10.0.0.2, port=4420 00:29:02.392 [2024-12-16 02:50:32.934153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1363140 is same with the state(6) to be set 00:29:02.392 [2024-12-16 02:50:32.934168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1359260 (9): Bad file descriptor 00:29:02.392 [2024-12-16 02:50:32.934179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x178f130 (9): Bad file descriptor 00:29:02.393 [2024-12-16 02:50:32.934188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1361440 (9): Bad file descriptor 00:29:02.393 [2024-12-16 02:50:32.934215] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:29:02.393 [2024-12-16 02:50:32.934232] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:29:02.393 [2024-12-16 02:50:32.934243] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:29:02.393 [2024-12-16 02:50:32.934252] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 
00:29:02.393 [2024-12-16 02:50:32.934307] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:29:02.393 [2024-12-16 02:50:32.934570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-12-16 02:50:32.934588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17bfc50 with addr=10.0.0.2, port=4420 00:29:02.393 [2024-12-16 02:50:32.934596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17bfc50 is same with the state(6) to be set 00:29:02.393 [2024-12-16 02:50:32.934804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-12-16 02:50:32.934817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1362cd0 with addr=10.0.0.2, port=4420 00:29:02.393 [2024-12-16 02:50:32.934824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1362cd0 is same with the state(6) to be set 00:29:02.393 [2024-12-16 02:50:32.934833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dcde0 (9): Bad file descriptor 00:29:02.393 [2024-12-16 02:50:32.934842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c5430 (9): Bad file descriptor 00:29:02.393 [2024-12-16 02:50:32.934857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x126e610 (9): Bad file descriptor 00:29:02.393 [2024-12-16 02:50:32.934866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1363140 (9): Bad file descriptor 00:29:02.393 [2024-12-16 02:50:32.934874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:02.393 [2024-12-16 02:50:32.934881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller 
reinitialization failed 00:29:02.393 [2024-12-16 02:50:32.934889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:02.393 [2024-12-16 02:50:32.934897] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:29:02.393 [2024-12-16 02:50:32.934905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:02.393 [2024-12-16 02:50:32.934911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:02.393 [2024-12-16 02:50:32.934918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:02.393 [2024-12-16 02:50:32.934925] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:29:02.393 [2024-12-16 02:50:32.934933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:02.393 [2024-12-16 02:50:32.934939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:02.393 [2024-12-16 02:50:32.934947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:02.393 [2024-12-16 02:50:32.934953] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:29:02.393 [2024-12-16 02:50:32.935176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:02.393 [2024-12-16 02:50:32.935190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17db160 with addr=10.0.0.2, port=4420 00:29:02.393 [2024-12-16 02:50:32.935198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17db160 is same with the state(6) to be set 00:29:02.393 [2024-12-16 02:50:32.935206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17bfc50 (9): Bad file descriptor 00:29:02.393 [2024-12-16 02:50:32.935215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1362cd0 (9): Bad file descriptor 00:29:02.393 [2024-12-16 02:50:32.935224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:02.393 [2024-12-16 02:50:32.935231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:02.393 [2024-12-16 02:50:32.935238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:02.393 [2024-12-16 02:50:32.935248] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:29:02.393 [2024-12-16 02:50:32.935256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:29:02.393 [2024-12-16 02:50:32.935262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:29:02.393 [2024-12-16 02:50:32.935268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 
00:29:02.393 [2024-12-16 02:50:32.935275] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:29:02.393 [2024-12-16 02:50:32.935283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:29:02.393 [2024-12-16 02:50:32.935290] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:29:02.393 [2024-12-16 02:50:32.935297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:29:02.393 [2024-12-16 02:50:32.935302] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:29:02.393 [2024-12-16 02:50:32.935310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:02.393 [2024-12-16 02:50:32.935315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:02.393 [2024-12-16 02:50:32.935321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:02.393 [2024-12-16 02:50:32.935329] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:29:02.393 [2024-12-16 02:50:32.935354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17db160 (9): Bad file descriptor 00:29:02.393 [2024-12-16 02:50:32.935363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:02.393 [2024-12-16 02:50:32.935369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:02.393 [2024-12-16 02:50:32.935375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:02.393 [2024-12-16 02:50:32.935381] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:29:02.393 [2024-12-16 02:50:32.935389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:02.393 [2024-12-16 02:50:32.935395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:02.393 [2024-12-16 02:50:32.935402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:02.393 [2024-12-16 02:50:32.935409] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:02.393 [2024-12-16 02:50:32.935433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:29:02.393 [2024-12-16 02:50:32.935440] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:29:02.393 [2024-12-16 02:50:32.935446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 
00:29:02.393 [2024-12-16 02:50:32.935453] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:29:02.653 02:50:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1096319 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1096319 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1096319 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@672 -- # es=1 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:04.029 rmmod nvme_tcp 00:29:04.029 rmmod nvme_fabrics 00:29:04.029 rmmod nvme_keyring 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@128 -- # set -e 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1096052 ']' 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1096052 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1096052 ']' 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1096052 00:29:04.029 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1096052) - No such process 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1096052 is not found' 00:29:04.029 Process with pid 1096052 is not found 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 
-- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:04.029 02:50:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:05.935 00:29:05.935 real 0m7.623s 00:29:05.935 user 0m18.190s 00:29:05.935 sys 0m1.364s 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:05.935 ************************************ 00:29:05.935 END TEST nvmf_shutdown_tc3 00:29:05.935 ************************************ 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:05.935 
************************************ 00:29:05.935 START TEST nvmf_shutdown_tc4 00:29:05.935 ************************************ 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:05.935 
02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:05.935 
02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:05.935 02:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:05.935 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:05.935 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:05.935 02:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:05.935 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.936 02:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:05.936 Found net devices under 0000:af:00.0: cvl_0_0 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:05.936 Found net devices under 0000:af:00.1: cvl_0_1 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:05.936 02:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:05.936 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:06.195 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:06.195 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:06.195 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:06.195 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:06.195 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:06.195 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:06.195 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:06.195 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:06.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:06.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:29:06.195 00:29:06.195 --- 10.0.0.2 ping statistics --- 00:29:06.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.195 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:29:06.195 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:06.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:06.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:29:06.195 00:29:06.195 --- 10.0.0.1 ping statistics --- 00:29:06.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:06.195 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:29:06.195 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:06.195 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:29:06.195 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:06.195 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:06.195 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:06.195 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:06.195 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:06.195 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:06.195 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:06.195 02:50:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:06.195 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:06.195 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:06.195 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:06.195 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1097557 00:29:06.195 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:06.195 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1097557 00:29:06.195 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1097557 ']' 00:29:06.195 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.195 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:06.195 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:06.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:06.195 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:06.195 02:50:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:06.455 [2024-12-16 02:50:36.876254] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:29:06.455 [2024-12-16 02:50:36.876302] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:06.455 [2024-12-16 02:50:36.937790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:06.455 [2024-12-16 02:50:36.960483] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:06.455 [2024-12-16 02:50:36.960520] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:06.455 [2024-12-16 02:50:36.960527] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:06.455 [2024-12-16 02:50:36.960535] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:06.455 [2024-12-16 02:50:36.960540] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:06.455 [2024-12-16 02:50:36.961989] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:06.455 [2024-12-16 02:50:36.962100] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:06.455 [2024-12-16 02:50:36.962206] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:06.455 [2024-12-16 02:50:36.962208] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:29:06.455 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:06.455 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:29:06.455 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:06.455 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:06.455 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:06.455 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:06.455 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:06.455 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:06.455 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:06.455 [2024-12-16 02:50:37.101844] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:06.455 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:06.455 02:50:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:06.455 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:06.455 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:06.455 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:06.714 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:06.714 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:06.714 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:06.714 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:06.714 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:06.714 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:06.714 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:06.714 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:06.714 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:06.714 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:06.714 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:29:06.714 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:29:06.714 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:29:06.714 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:29:06.714 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:29:06.714 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:29:06.714 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:29:06.714 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:29:06.714 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:29:06.714 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}"
00:29:06.714 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat
00:29:06.714 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd
00:29:06.714 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:06.714 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:29:06.714 Malloc1
00:29:06.714 [2024-12-16 02:50:37.224876] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:06.714 Malloc2
00:29:06.714 Malloc3
00:29:06.714 Malloc4
00:29:06.714 Malloc5
00:29:06.973 Malloc6
00:29:06.973 Malloc7
00:29:06.973 Malloc8
00:29:06.973 Malloc9
00:29:06.973 Malloc10
00:29:06.973 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:06.973 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems
00:29:06.973 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:06.973 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:29:07.232 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1097617
00:29:07.232 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5
00:29:07.232 02:50:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4
00:29:07.232 [2024-12-16 02:50:37.730751] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:29:12.510 02:50:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:29:12.510 02:50:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1097557
00:29:12.510 02:50:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1097557 ']'
00:29:12.510 02:50:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1097557
00:29:12.510 02:50:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:29:12.510 02:50:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:12.510 02:50:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1097557
00:29:12.510 02:50:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:12.510 02:50:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:12.510 02:50:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1097557'
00:29:12.510 killing process with pid 1097557
00:29:12.510 02:50:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1097557
00:29:12.510 02:50:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1097557
00:29:12.510 Write completed with error (sct=0, sc=8)
00:29:12.510 starting I/O failed: -6
00:29:12.510 Write completed with error (sct=0, sc=8)
00:29:12.510 Write completed with error (sct=0, sc=8)
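The xtrace above walks through the `killprocess` helper: check the pid argument, confirm the process is alive with `kill -0`, look up its command name (here `reactor_1`), refuse to signal `sudo`, then kill and reap it. A minimal, hypothetical sketch of that flow is below; it is a simplification for illustration, not the actual `common/autotest_common.sh` implementation, and it assumes a Linux `ps` that supports `--no-headers`.

```shell
#!/usr/bin/env bash
# Hypothetical, simplified sketch of the killprocess flow traced in the
# log above (not SPDK's real helper): verify the pid, refuse to signal a
# sudo wrapper, then kill the process and reap it.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                 # the '[' -z ... ']' check in the trace
    kill -0 "$pid" 2>/dev/null || return 1    # process must still be running
    local name
    name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_1 in the trace
    [ "$name" = sudo ] && return 1            # never signal a sudo process
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                   # reap it if it is our child
    return 0
}

# demo: start a background sleeper and take it down
sleep 60 &
killprocess "$!"
```

Checking `kill -0` first keeps the helper idempotent: calling it on an already-dead pid is a no-op failure rather than a stray signal to a recycled pid.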
00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 starting I/O failed: -6 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 starting I/O failed: -6 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 starting I/O failed: -6 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 starting I/O failed: -6 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 starting I/O failed: -6 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 starting I/O failed: -6 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 starting I/O failed: -6 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 starting I/O failed: -6 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed 
with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 starting I/O failed: -6 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 [2024-12-16 02:50:42.729806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 starting I/O failed: -6 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 starting I/O failed: -6 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 starting I/O failed: -6 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 starting I/O failed: -6 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 starting I/O failed: -6 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 starting I/O failed: -6 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 starting I/O failed: -6 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 starting I/O failed: -6 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 starting I/O failed: -6 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 starting I/O failed: -6 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 
starting I/O failed: -6 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.510 starting I/O failed: -6 00:29:12.510 Write completed with error (sct=0, sc=8) 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 [2024-12-16 02:50:42.730649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 
Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, 
sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 [2024-12-16 02:50:42.731496] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed7280 is same with the state(6) to be set 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 [2024-12-16 02:50:42.731532] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed7280 is same with the state(6) to be set 00:29:12.511 [2024-12-16 02:50:42.731540] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed7280 is same with the state(6) to be set 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 [2024-12-16 02:50:42.731547] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed7280 is same with the state(6) to be set 00:29:12.511 starting I/O failed: -6 00:29:12.511 [2024-12-16 02:50:42.731557] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed7280 is same with the state(6) to be set 00:29:12.511 [2024-12-16 02:50:42.731564] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed7280 is same with the state(6) to be set 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 [2024-12-16 
02:50:42.731570] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed7280 is same with the state(6) to be set 00:29:12.511 starting I/O failed: -6 00:29:12.511 [2024-12-16 02:50:42.731577] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed7280 is same with the state(6) to be set 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 [2024-12-16 02:50:42.731744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 [2024-12-16 02:50:42.731870] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed7770 is same with the state(6) to be set 00:29:12.511 starting I/O failed: -6 00:29:12.511 [2024-12-16 02:50:42.731895] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed7770 is same with the state(6) to be set 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 [2024-12-16 02:50:42.731903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed7770 is same with 
starting I/O failed: -6 00:29:12.511 the state(6) to be set 00:29:12.511 [2024-12-16 02:50:42.731912] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed7770 is same with the state(6) to be set 00:29:12.511 [2024-12-16 02:50:42.731919] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed7770 is same with Write completed with error (sct=0, sc=8) 00:29:12.511 the state(6) to be set 00:29:12.511 starting I/O failed: -6 00:29:12.511 [2024-12-16 02:50:42.731927] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed7770 is same with the state(6) to be set 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 starting I/O 
failed: -6 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 [2024-12-16 02:50:42.732227] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed7c40 is same with starting I/O failed: -6 00:29:12.511 the state(6) to be set 00:29:12.511 [2024-12-16 02:50:42.732251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed7c40 is same with the state(6) to be set 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 [2024-12-16 02:50:42.732259] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed7c40 is same with the state(6) to be set 00:29:12.511 starting I/O failed: -6 00:29:12.511 [2024-12-16 02:50:42.732266] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed7c40 is same with the state(6) to be set 00:29:12.511 [2024-12-16 02:50:42.732273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed7c40 is same with the state(6) to be set 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 [2024-12-16 02:50:42.732279] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed7c40 is same with the state(6) to be set 00:29:12.511 starting I/O failed: -6 00:29:12.511 [2024-12-16 02:50:42.732286] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed7c40 is same with the state(6) to be set 00:29:12.511 [2024-12-16 02:50:42.732293] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed7c40 is same with the state(6) to be set 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.511 [2024-12-16 02:50:42.732299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed7c40 is same with the state(6) to be set 00:29:12.511 starting I/O failed: -6 00:29:12.511 [2024-12-16 02:50:42.732305] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed7c40 is same with the 
state(6) to be set 00:29:12.511 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 [2024-12-16 02:50:42.732600] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed6db0 is same with Write completed with error (sct=0, sc=8) 00:29:12.512 the state(6) to be set 00:29:12.512 starting I/O failed: -6 00:29:12.512 [2024-12-16 02:50:42.732625] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed6db0 is same with the state(6) to be set 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 [2024-12-16 02:50:42.732633] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed6db0 is same with 
the state(6) to be set 00:29:12.512 starting I/O failed: -6 00:29:12.512 [2024-12-16 02:50:42.732641] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed6db0 is same with the state(6) to be set 00:29:12.512 [2024-12-16 02:50:42.732648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed6db0 is same with the state(6) to be set 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 [2024-12-16 02:50:42.732656] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ed6db0 is same with the state(6) to be set 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O 
failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 [2024-12-16 02:50:42.733312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.512 NVMe io qpair process completion error 00:29:12.512 [2024-12-16 02:50:42.734517] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c5c70 is same with the state(6) to be set 00:29:12.512 [2024-12-16 02:50:42.734541] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c5c70 is same with the state(6) to be set 00:29:12.512 [2024-12-16 02:50:42.734551] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c5c70 is same with the state(6) to be set 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, 
sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 [2024-12-16 02:50:42.735809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.512 starting I/O failed: -6 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with 
error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 
Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.512 starting I/O failed: -6 00:29:12.512 Write completed with error (sct=0, sc=8) 00:29:12.513 starting I/O failed: -6 00:29:12.513 Write completed with error (sct=0, sc=8) 00:29:12.513 Write completed with error (sct=0, sc=8) 00:29:12.513 starting I/O failed: -6 00:29:12.513 Write completed with error (sct=0, sc=8) 00:29:12.513 starting I/O failed: -6 00:29:12.513 Write completed with error (sct=0, sc=8) 00:29:12.513 starting I/O failed: -6 00:29:12.513 Write completed with error (sct=0, sc=8) 00:29:12.513 Write completed with error (sct=0, sc=8) 00:29:12.513 starting I/O failed: -6 00:29:12.513 Write completed with error (sct=0, sc=8) 00:29:12.513 starting I/O failed: -6 00:29:12.513 Write completed with error (sct=0, sc=8) 00:29:12.513 starting I/O failed: -6 00:29:12.513 Write completed with error (sct=0, sc=8) 00:29:12.513 Write completed with error (sct=0, sc=8) 00:29:12.513 starting I/O failed: -6 00:29:12.513 Write completed with error (sct=0, 
sc=8) 00:29:12.513 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated; interleaved duplicates trimmed, first occurrence of each distinct message kept below ...]
00:29:12.513 [2024-12-16 02:50:42.737624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.513 NVMe io qpair process completion error
00:29:12.513 [2024-12-16 02:50:42.741322] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c6fb0 is same with the state(6) to be set [message repeated]
00:29:12.513 [2024-12-16 02:50:42.741924] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c7480 is same with the state(6) to be set [message repeated]
00:29:12.513 [2024-12-16 02:50:42.742274] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c7950 is same with the state(6) to be set [message repeated]
00:29:12.513 [2024-12-16 02:50:42.742755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c6ae0 is same with the state(6) to be set [message repeated]
00:29:12.513 [2024-12-16 02:50:42.743413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c82f0 is same with the state(6) to be set [message repeated]
00:29:12.513 [2024-12-16 02:50:42.743416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:12.513 [2024-12-16 02:50:42.743490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c87c0 is same with the state(6) to be set [message repeated]
00:29:12.514 [2024-12-16 02:50:42.744103] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c8c90 is same with the state(6) to be set [message repeated]
00:29:12.514 [2024-12-16 02:50:42.744308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.514 [2024-12-16 02:50:42.744804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20c7e20 is same with the state(6) to be set [message repeated]
00:29:12.514 [2024-12-16 02:50:42.745295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.515 [2024-12-16 02:50:42.746815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.515 NVMe io qpair process completion error
00:29:12.515 [2024-12-16 02:50:42.747693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:12.516 [2024-12-16 02:50:42.748602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.516 [2024-12-16 02:50:42.749636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.517 [2024-12-16 02:50:42.751693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.517 NVMe io qpair process completion error
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated ...]
00:29:12.517 Write completed with error (sct=0, sc=8)
00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 [2024-12-16 02:50:42.752818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 
00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write 
completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 [2024-12-16 02:50:42.753696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write 
completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 
00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 Write completed with error (sct=0, sc=8) 00:29:12.517 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 [2024-12-16 02:50:42.754691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 
Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 
00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: 
-6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 [2024-12-16 02:50:42.760075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.518 NVMe io qpair process completion error 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 
00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 [2024-12-16 02:50:42.761039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 
00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 starting I/O failed: -6 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.518 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write 
completed with error (sct=0, sc=8) 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 [2024-12-16 02:50:42.761946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write 
completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 
00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 [2024-12-16 02:50:42.762949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, sc=8) 00:29:12.519 starting I/O failed: -6 00:29:12.519 Write completed with error (sct=0, 
sc=8) 00:29:12.519 starting I/O failed: -6
00:29:12.519-00:29:12.520 [repeated: Write completed with error (sct=0, sc=8) / starting I/O failed: -6]
00:29:12.520 [2024-12-16 02:50:42.767511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.520 NVMe io qpair process completion error
00:29:12.520 [repeated: Write completed with error (sct=0, sc=8) / starting I/O failed: -6]
00:29:12.520 [2024-12-16 02:50:42.768518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:12.520-00:29:12.521 [repeated: Write completed with error (sct=0, sc=8) / starting I/O failed: -6]
00:29:12.521 [2024-12-16 02:50:42.769455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.521 [repeated: Write completed with error (sct=0, sc=8) / starting I/O failed: -6]
00:29:12.521 [2024-12-16 02:50:42.770427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.521 [repeated: Write completed with error (sct=0, sc=8) / starting I/O failed: -6]
00:29:12.521 [2024-12-16 02:50:42.772172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.521 NVMe io qpair process completion error
00:29:12.521-00:29:12.522 [repeated: Write completed with error (sct=0, sc=8) / starting I/O failed: -6]
00:29:12.522 [2024-12-16 02:50:42.773171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:12.522 [repeated: Write completed with error (sct=0, sc=8) / starting I/O failed: -6]
00:29:12.522 [2024-12-16 02:50:42.774037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:12.522 [repeated: Write completed with error (sct=0, sc=8) / starting I/O failed: -6]
00:29:12.522 [2024-12-16 02:50:42.775028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.522-00:29:12.523 [repeated: Write completed with error (sct=0, sc=8) / starting I/O failed: -6]
00:29:12.523 [2024-12-16 02:50:42.779981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:12.523 NVMe io qpair process completion error
00:29:12.523 [repeated: Write completed with error (sct=0, sc=8)]
00:29:12.523 [repeated: Write completed with error (sct=0, sc=8) / starting I/O failed: -6]
00:29:12.523 Write
completed with error (sct=0, sc=8) 00:29:12.523 Write completed with error (sct=0, sc=8) 00:29:12.523 starting I/O failed: -6 00:29:12.523 Write completed with error (sct=0, sc=8) 00:29:12.523 Write completed with error (sct=0, sc=8) 00:29:12.523 starting I/O failed: -6 00:29:12.523 Write completed with error (sct=0, sc=8) 00:29:12.523 Write completed with error (sct=0, sc=8) 00:29:12.523 starting I/O failed: -6 00:29:12.523 Write completed with error (sct=0, sc=8) 00:29:12.523 Write completed with error (sct=0, sc=8) 00:29:12.523 starting I/O failed: -6 00:29:12.523 Write completed with error (sct=0, sc=8) 00:29:12.523 Write completed with error (sct=0, sc=8) 00:29:12.523 starting I/O failed: -6 00:29:12.523 Write completed with error (sct=0, sc=8) 00:29:12.523 Write completed with error (sct=0, sc=8) 00:29:12.523 starting I/O failed: -6 00:29:12.523 Write completed with error (sct=0, sc=8) 00:29:12.523 Write completed with error (sct=0, sc=8) 00:29:12.523 starting I/O failed: -6 00:29:12.523 Write completed with error (sct=0, sc=8) 00:29:12.523 Write completed with error (sct=0, sc=8) 00:29:12.523 starting I/O failed: -6 00:29:12.523 Write completed with error (sct=0, sc=8) 00:29:12.523 Write completed with error (sct=0, sc=8) 00:29:12.523 starting I/O failed: -6 00:29:12.523 Write completed with error (sct=0, sc=8) 00:29:12.523 Write completed with error (sct=0, sc=8) 00:29:12.523 starting I/O failed: -6 00:29:12.523 Write completed with error (sct=0, sc=8) 00:29:12.523 Write completed with error (sct=0, sc=8) 00:29:12.523 starting I/O failed: -6 00:29:12.523 Write completed with error (sct=0, sc=8) 00:29:12.523 Write completed with error (sct=0, sc=8) 00:29:12.523 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O 
failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 [2024-12-16 02:50:42.785108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write 
completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 
00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 [2024-12-16 02:50:42.786237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with 
error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed 
with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write 
completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.524 starting I/O failed: -6 00:29:12.524 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 [2024-12-16 02:50:42.789886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:12.525 NVMe io qpair process completion error 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 
00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write 
completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error 
(sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 
00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.525 starting I/O failed: -6 
00:29:12.525 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 [2024-12-16 02:50:42.792390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 
Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 
00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: -6 00:29:12.526 Write completed with error (sct=0, sc=8) 00:29:12.526 starting I/O failed: 
-6
00:29:12.526 Write completed with error (sct=0, sc=8)
00:29:12.526 starting I/O failed: -6
00:29:12.526 [2024-12-16 02:50:42.794199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:12.526 NVMe io qpair process completion error
00:29:12.526 Initializing NVMe Controllers
00:29:12.526 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:29:12.526 Controller IO queue size 128, less than required.
00:29:12.526 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:12.526 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:29:12.526 Controller IO queue size 128, less than required.
00:29:12.526 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:12.526 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:29:12.526 Controller IO queue size 128, less than required.
00:29:12.526 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:12.526 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:29:12.526 Controller IO queue size 128, less than required.
00:29:12.526 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:12.526 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:29:12.526 Controller IO queue size 128, less than required.
00:29:12.526 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:12.526 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:29:12.526 Controller IO queue size 128, less than required.
00:29:12.526 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:12.526 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:12.526 Controller IO queue size 128, less than required.
00:29:12.526 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:12.526 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:29:12.526 Controller IO queue size 128, less than required.
00:29:12.526 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:12.526 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:29:12.526 Controller IO queue size 128, less than required.
00:29:12.526 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:12.526 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:29:12.526 Controller IO queue size 128, less than required.
00:29:12.526 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:12.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:29:12.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:29:12.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:29:12.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:29:12.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:29:12.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:29:12.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:12.527 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:29:12.527 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:29:12.527 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:29:12.527 Initialization complete. Launching workers.
00:29:12.527 ========================================================
00:29:12.527 Latency(us)
00:29:12.527 Device Information : IOPS MiB/s Average min max
00:29:12.527 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2211.20 95.01 57891.77 728.84 106550.07
00:29:12.527 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2209.94 94.96 57934.21 669.73 104852.21
00:29:12.527 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2229.06 95.78 57454.65 892.29 102659.35
00:29:12.527 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2236.41 96.10 57322.52 901.30 107935.28
00:29:12.527 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2218.76 95.34 57823.43 725.68 113094.48
00:29:12.527 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2205.52 94.77 58184.01 893.53 100498.80
00:29:12.527 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2191.44 94.16 57981.64 855.03 97276.10
00:29:12.527 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2212.67 95.08 57707.37 501.99 97121.74
00:29:12.527 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2158.87 92.76 59444.22 970.46 119777.50
00:29:12.527 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2238.10 96.17 57369.62 777.51 102041.19
00:29:12.527 ========================================================
00:29:12.527 Total : 22111.97 950.12 57905.84 501.99 119777.50
00:29:12.527
00:29:12.527 [2024-12-16 02:50:42.797174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd8ffb0 is same with the state(6) to be set
00:29:12.527 [2024-12-16 02:50:42.797220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd90370 is same with the state(6) to be set
00:29:12.527 [2024-12-16 02:50:42.797252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd91ff0 is same with the state(6) to be set
00:29:12.527 [2024-12-16 02:50:42.797282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd90880 is same with the state(6) to be set
00:29:12.527 [2024-12-16 02:50:42.797311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd90190 is same with the state(6) to be set
00:29:12.527 [2024-12-16 02:50:42.797340] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd91cc0 is same with the state(6) to be set
00:29:12.527 [2024-12-16 02:50:42.797367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd95b30 is same with the state(6) to be set
00:29:12.527 [2024-12-16 02:50:42.797394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd90550 is same with the state(6) to be set
00:29:12.527 [2024-12-16 02:50:42.797421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd92320 is same with the state(6) to be set
00:29:12.527 [2024-12-16 02:50:42.797448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd92650 is same with the state(6) to be set
00:29:12.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:29:12.527 02:50:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:29:13.465 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1097617
00:29:13.465 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:29:13.465 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1097617
00:29:13.465 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640
-- # local arg=wait 00:29:13.465 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:13.465 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:29:13.465 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:13.465 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1097617 00:29:13.465 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:29:13.465 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:13.465 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:13.465 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:13.465 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:29:13.465 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:13.465 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:13.465 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:13.465 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:13.465 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:29:13.465 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:29:13.724 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:13.724 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:29:13.724 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:13.724 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:13.724 rmmod nvme_tcp 00:29:13.724 rmmod nvme_fabrics 00:29:13.724 rmmod nvme_keyring 00:29:13.724 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:13.724 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:29:13.724 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:29:13.724 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1097557 ']' 00:29:13.724 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1097557 00:29:13.724 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1097557 ']' 00:29:13.724 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1097557 00:29:13.724 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1097557) - No such process 00:29:13.724 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1097557 is not found' 00:29:13.724 Process with pid 1097557 is not found 00:29:13.724 02:50:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:13.724 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:13.724 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:13.724 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:29:13.725 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:29:13.725 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:13.725 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:29:13.725 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:13.725 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:13.725 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:13.725 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:13.725 02:50:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:15.628 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:15.628 00:29:15.628 real 0m9.779s 00:29:15.628 user 0m24.986s 00:29:15.628 sys 0m5.125s 00:29:15.628 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:15.628 02:50:46 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:15.628 ************************************ 00:29:15.628 END TEST nvmf_shutdown_tc4 00:29:15.628 ************************************ 00:29:15.887 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:29:15.887 00:29:15.887 real 0m40.493s 00:29:15.887 user 1m39.045s 00:29:15.887 sys 0m13.920s 00:29:15.887 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:15.887 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:15.887 ************************************ 00:29:15.887 END TEST nvmf_shutdown 00:29:15.887 ************************************ 00:29:15.887 02:50:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:15.887 02:50:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:15.887 02:50:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:15.887 02:50:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:15.887 ************************************ 00:29:15.887 START TEST nvmf_nsid 00:29:15.887 ************************************ 00:29:15.887 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:15.887 * Looking for test storage... 
00:29:15.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:15.887 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:15.887 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:29:15.887 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:16.147 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:16.147 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:16.147 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:16.147 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:16.147 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:29:16.147 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:29:16.147 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:29:16.147 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:29:16.147 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:29:16.147 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:29:16.147 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:29:16.147 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:16.147 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:29:16.147 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:29:16.147 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:16.147 
02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:16.147 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:29:16.147 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:29:16.147 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:16.147 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:16.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.148 --rc genhtml_branch_coverage=1 00:29:16.148 --rc genhtml_function_coverage=1 00:29:16.148 --rc genhtml_legend=1 00:29:16.148 --rc geninfo_all_blocks=1 00:29:16.148 --rc 
geninfo_unexecuted_blocks=1 00:29:16.148 00:29:16.148 ' 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:16.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.148 --rc genhtml_branch_coverage=1 00:29:16.148 --rc genhtml_function_coverage=1 00:29:16.148 --rc genhtml_legend=1 00:29:16.148 --rc geninfo_all_blocks=1 00:29:16.148 --rc geninfo_unexecuted_blocks=1 00:29:16.148 00:29:16.148 ' 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:16.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.148 --rc genhtml_branch_coverage=1 00:29:16.148 --rc genhtml_function_coverage=1 00:29:16.148 --rc genhtml_legend=1 00:29:16.148 --rc geninfo_all_blocks=1 00:29:16.148 --rc geninfo_unexecuted_blocks=1 00:29:16.148 00:29:16.148 ' 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:16.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:16.148 --rc genhtml_branch_coverage=1 00:29:16.148 --rc genhtml_function_coverage=1 00:29:16.148 --rc genhtml_legend=1 00:29:16.148 --rc geninfo_all_blocks=1 00:29:16.148 --rc geninfo_unexecuted_blocks=1 00:29:16.148 00:29:16.148 ' 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:16.148 02:50:46 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:16.148 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:29:16.148 02:50:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:21.563 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:21.563 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:21.563 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:21.822 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:21.822 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.822 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:21.822 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:21.822 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:21.822 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:21.822 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.822 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:21.823 Found net devices under 0000:af:00.0: cvl_0_0 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:21.823 Found net devices under 0000:af:00.1: cvl_0_1 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:21.823 02:50:52 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:21.823 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:29:21.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:29:21.823 00:29:21.823 --- 10.0.0.2 ping statistics --- 00:29:21.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.823 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:21.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:21.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:29:21.823 00:29:21.823 --- 10.0.0.1 ping statistics --- 00:29:21.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:21.823 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:21.823 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:22.082 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:29:22.082 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:22.082 02:50:52 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:22.082 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:22.082 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1102138 00:29:22.082 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:29:22.082 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1102138 00:29:22.082 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1102138 ']' 00:29:22.082 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:22.082 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:22.082 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:22.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:22.082 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:22.082 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:22.082 [2024-12-16 02:50:52.560948] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
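The namespace plumbing traced above (create cvl_0_0_ns_spdk, move one port into it, address both ends, open TCP/4420 via iptables, then ping each direction before launching nvmf_tgt inside the namespace) can be rehearsed on any machine with a veth pair in place of the physical cvl_0_* ports. This is a minimal sketch, not the harness's nvmf_tcp_init: every interface and namespace name below is a stand-in, and the privileged commands only run as root:

```shell
#!/usr/bin/env bash
# Sketch of the target-namespace setup from the trace, using a veth
# pair instead of the physical cvl_0_* ports. All names illustrative.
set -eu

setup_demo_ns() {
  local ns=demo_tgt_ns                      # stands in for cvl_0_0_ns_spdk
  ip netns add "$ns"
  ip link add veth_init type veth peer name veth_tgt
  ip link set veth_tgt netns "$ns"

  ip addr add 10.0.0.1/24 dev veth_init                     # initiator side
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev veth_tgt  # target side
  ip link set veth_init up
  ip netns exec "$ns" ip link set veth_tgt up
  ip netns exec "$ns" ip link set lo up

  # Admit NVMe/TCP traffic, then verify reachability in both directions.
  iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec "$ns" ping -c 1 10.0.0.1
}

# The setup needs CAP_NET_ADMIN; skip it cleanly when unprivileged.
[ "$(id -u)" -eq 0 ] && setup_demo_ns || true
```

With the target process started under `ip netns exec`, its 4420 listener is reachable only through the namespaced side of the pair, which is what lets one host play both initiator and target in this test.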
00:29:22.082 [2024-12-16 02:50:52.560995] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:22.082 [2024-12-16 02:50:52.639868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.082 [2024-12-16 02:50:52.660855] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:22.082 [2024-12-16 02:50:52.660893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:22.082 [2024-12-16 02:50:52.660901] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:22.082 [2024-12-16 02:50:52.660907] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:22.082 [2024-12-16 02:50:52.660912] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:22.082 [2024-12-16 02:50:52.661405] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1102224 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:22.342 
02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=b7e45a6e-4c51-4162-accb-76038bd2a393 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=4a95b8b0-4da7-42e4-9a50-5c2806c74b91 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=5ed9fa78-68ff-42f3-8d89-d697b95f940e 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:22.342 null0 00:29:22.342 null1 00:29:22.342 [2024-12-16 02:50:52.845549] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
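The get_main_ns_ip lookup traced a few lines up picks a variable *name* out of an associative array keyed by transport, then dereferences it with bash indirect expansion. A standalone sketch of that pattern, using the addresses this run assigned (variable values here mirror the trace; the function body is a simplification of nvmf/common.sh):

```shell
#!/usr/bin/env bash
# The harness stores, per transport, the NAME of the variable holding
# the address to use, then resolves it indirectly with ${!var}.
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_INITIATOR_IP=10.0.0.1
TEST_TRANSPORT=tcp

get_main_ns_ip() {
  local ip
  declare -A ip_candidates=(
    [rdma]=NVMF_FIRST_TARGET_IP   # rdma runs resolve to the target-side IP
    [tcp]=NVMF_INITIATOR_IP       # tcp runs resolve to the initiator-side IP
  )
  ip=${ip_candidates[$TEST_TRANSPORT]}   # a variable name, e.g. NVMF_INITIATOR_IP
  echo "${!ip}"                          # indirect expansion yields its value
}

get_main_ns_ip    # prints 10.0.0.1 for tcp
```

The indirection is what lets the same helper serve both transports without branching on addresses directly, which matches the `ip=NVMF_INITIATOR_IP` then `echo 10.0.0.1` pair visible in the trace.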
00:29:22.342 [2024-12-16 02:50:52.845591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1102224 ] 00:29:22.342 null2 00:29:22.342 [2024-12-16 02:50:52.851654] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:22.342 [2024-12-16 02:50:52.875839] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1102224 /var/tmp/tgt2.sock 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1102224 ']' 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:29:22.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
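The three `uuidgen` values above are what the test later checks against the NGUIDs that `nvme id-ns ... -o json | jq -r .nguid` reports: an SPDK namespace created with a UUID exposes an NGUID equal to that UUID with the dashes removed. The trace shows uuid2nguid doing `tr -d -`; the uppercase step below is an assumption added so the comparison is case-insensitive, and the helper name is illustrative:

```shell
#!/usr/bin/env bash
# Standalone replay of the NGUID check, using ns1uuid from this run.
uuid="b7e45a6e-4c51-4162-accb-76038bd2a393"        # ns1uuid above

uuid2nguid_demo() {
  # Same normalization as the traced `tr -d -`, plus uppercasing
  # (assumed here) so the string compare ignores case.
  echo "${1//-/}" | tr '[:lower:]' '[:upper:]'
}

reported_nguid="b7e45a6e4c514162accb76038bd2a393"  # what jq -r .nguid printed
expected=$(uuid2nguid_demo "$uuid")
actual=$(echo "$reported_nguid" | tr '[:lower:]' '[:upper:]')

[ "$expected" = "$actual" ] && echo "NGUID matches UUID"
```

This is the whole point of the nsid test: namespaces attached through a second target still surface stable, UUID-derived NGUIDs to the initiator-side block devices (nvme0n1..n3).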
00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:22.342 02:50:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:22.342 [2024-12-16 02:50:52.918768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.342 [2024-12-16 02:50:52.941086] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.601 02:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:22.601 02:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:22.601 02:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:29:22.859 [2024-12-16 02:50:53.448671] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:22.859 [2024-12-16 02:50:53.464758] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:29:22.859 nvme0n1 nvme0n2 00:29:22.859 nvme1n1 00:29:23.117 02:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:29:23.117 02:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:29:23.117 02:50:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:24.053 02:50:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:29:24.053 02:50:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:29:24.053 02:50:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:29:24.053 02:50:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:29:24.053 02:50:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:29:24.053 02:50:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:29:24.053 02:50:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:29:24.053 02:50:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:24.053 02:50:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:24.053 02:50:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:24.053 02:50:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:29:24.053 02:50:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:29:24.053 02:50:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:29:24.989 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:24.989 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:24.989 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:24.989 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:29:24.989 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:24.989 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid b7e45a6e-4c51-4162-accb-76038bd2a393 00:29:24.989 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:24.989 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:29:24.989 02:50:55 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:29:24.989 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:29:24.989 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=b7e45a6e4c514162accb76038bd2a393 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo B7E45A6E4C514162ACCB76038BD2A393 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ B7E45A6E4C514162ACCB76038BD2A393 == \B\7\E\4\5\A\6\E\4\C\5\1\4\1\6\2\A\C\C\B\7\6\0\3\8\B\D\2\A\3\9\3 ]] 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 4a95b8b0-4da7-42e4-9a50-5c2806c74b91 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:29:25.246 
02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4a95b8b04da742e49a505c2806c74b91 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4A95B8B04DA742E49A505C2806C74B91 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 4A95B8B04DA742E49A505C2806C74B91 == \4\A\9\5\B\8\B\0\4\D\A\7\4\2\E\4\9\A\5\0\5\C\2\8\0\6\C\7\4\B\9\1 ]] 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 5ed9fa78-68ff-42f3-8d89-d697b95f940e 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=5ed9fa7868ff42f38d89d697b95f940e 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 5ED9FA7868FF42F38D89D697B95F940E 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 5ED9FA7868FF42F38D89D697B95F940E == \5\E\D\9\F\A\7\8\6\8\F\F\4\2\F\3\8\D\8\9\D\6\9\7\B\9\5\F\9\4\0\E ]] 00:29:25.246 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:29:25.505 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:29:25.505 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:29:25.505 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1102224 00:29:25.505 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1102224 ']' 00:29:25.505 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1102224 00:29:25.505 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:25.505 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:25.505 02:50:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1102224 00:29:25.505 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:25.505 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:25.505 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1102224' 00:29:25.505 killing process with pid 1102224 00:29:25.505 02:50:56 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1102224 00:29:25.505 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1102224 00:29:25.764 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:29:25.764 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:25.764 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:29:25.764 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:25.764 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:29:25.764 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:25.764 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:25.764 rmmod nvme_tcp 00:29:25.764 rmmod nvme_fabrics 00:29:25.764 rmmod nvme_keyring 00:29:25.764 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:25.764 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:29:25.764 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:29:25.764 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1102138 ']' 00:29:25.764 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1102138 00:29:25.764 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1102138 ']' 00:29:25.764 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1102138 00:29:25.764 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:25.764 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:25.764 02:50:56 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1102138 00:29:26.024 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:26.024 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:26.024 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1102138' 00:29:26.024 killing process with pid 1102138 00:29:26.024 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1102138 00:29:26.024 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1102138 00:29:26.024 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:26.024 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:26.024 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:26.024 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:29:26.024 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:29:26.024 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:26.024 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:29:26.024 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:26.024 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:26.024 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.024 02:50:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:26.024 02:50:56 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.562 02:50:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:28.562 00:29:28.562 real 0m12.271s 00:29:28.562 user 0m9.463s 00:29:28.562 sys 0m5.501s 00:29:28.562 02:50:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:28.562 02:50:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:28.562 ************************************ 00:29:28.562 END TEST nvmf_nsid 00:29:28.562 ************************************ 00:29:28.562 02:50:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:28.562 00:29:28.562 real 18m34.093s 00:29:28.562 user 49m6.027s 00:29:28.562 sys 4m42.876s 00:29:28.562 02:50:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:28.562 02:50:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:28.562 ************************************ 00:29:28.562 END TEST nvmf_target_extra 00:29:28.562 ************************************ 00:29:28.562 02:50:58 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:28.562 02:50:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:28.562 02:50:58 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:28.562 02:50:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:28.562 ************************************ 00:29:28.562 START TEST nvmf_host 00:29:28.562 ************************************ 00:29:28.562 02:50:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:28.562 * Looking for test storage... 
00:29:28.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:28.562 02:50:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:28.562 02:50:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:29:28.562 02:50:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:28.562 02:50:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:28.562 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:28.562 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:28.562 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:28.562 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:28.562 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:28.562 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:28.562 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:28.562 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:28.562 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:28.562 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:28.562 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:28.562 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:29:28.562 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:29:28.562 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:28.562 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:28.562 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:29:28.562 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:29:28.562 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:28.562 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:29:28.562 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:28.562 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:29:28.562 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:29:28.562 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:28.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.563 --rc genhtml_branch_coverage=1 00:29:28.563 --rc genhtml_function_coverage=1 00:29:28.563 --rc genhtml_legend=1 00:29:28.563 --rc geninfo_all_blocks=1 00:29:28.563 --rc geninfo_unexecuted_blocks=1 00:29:28.563 00:29:28.563 ' 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:28.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.563 --rc genhtml_branch_coverage=1 00:29:28.563 --rc genhtml_function_coverage=1 00:29:28.563 --rc genhtml_legend=1 00:29:28.563 --rc 
geninfo_all_blocks=1 00:29:28.563 --rc geninfo_unexecuted_blocks=1 00:29:28.563 00:29:28.563 ' 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:28.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.563 --rc genhtml_branch_coverage=1 00:29:28.563 --rc genhtml_function_coverage=1 00:29:28.563 --rc genhtml_legend=1 00:29:28.563 --rc geninfo_all_blocks=1 00:29:28.563 --rc geninfo_unexecuted_blocks=1 00:29:28.563 00:29:28.563 ' 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:28.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.563 --rc genhtml_branch_coverage=1 00:29:28.563 --rc genhtml_function_coverage=1 00:29:28.563 --rc genhtml_legend=1 00:29:28.563 --rc geninfo_all_blocks=1 00:29:28.563 --rc geninfo_unexecuted_blocks=1 00:29:28.563 00:29:28.563 ' 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:28.563 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:28.563 02:50:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:28.563 ************************************ 00:29:28.563 START TEST nvmf_multicontroller 00:29:28.563 ************************************ 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:28.563 * Looking for test storage... 
00:29:28.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:28.563 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:28.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.564 --rc genhtml_branch_coverage=1 00:29:28.564 --rc genhtml_function_coverage=1 
00:29:28.564 --rc genhtml_legend=1 00:29:28.564 --rc geninfo_all_blocks=1 00:29:28.564 --rc geninfo_unexecuted_blocks=1 00:29:28.564 00:29:28.564 ' 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:28.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.564 --rc genhtml_branch_coverage=1 00:29:28.564 --rc genhtml_function_coverage=1 00:29:28.564 --rc genhtml_legend=1 00:29:28.564 --rc geninfo_all_blocks=1 00:29:28.564 --rc geninfo_unexecuted_blocks=1 00:29:28.564 00:29:28.564 ' 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:28.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.564 --rc genhtml_branch_coverage=1 00:29:28.564 --rc genhtml_function_coverage=1 00:29:28.564 --rc genhtml_legend=1 00:29:28.564 --rc geninfo_all_blocks=1 00:29:28.564 --rc geninfo_unexecuted_blocks=1 00:29:28.564 00:29:28.564 ' 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:28.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:28.564 --rc genhtml_branch_coverage=1 00:29:28.564 --rc genhtml_function_coverage=1 00:29:28.564 --rc genhtml_legend=1 00:29:28.564 --rc geninfo_all_blocks=1 00:29:28.564 --rc geninfo_unexecuted_blocks=1 00:29:28.564 00:29:28.564 ' 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:28.564 02:50:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:28.564 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:28.564 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:28.824 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:28.824 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:28.824 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:28.824 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:28.824 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:29:28.824 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:28.824 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:28.824 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:28.824 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:28.824 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:28.824 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:29:28.824 02:50:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:35.396 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:35.396 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:35.396 02:51:04 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:35.396 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:35.397 Found net devices under 0000:af:00.0: cvl_0_0 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:35.397 Found net devices under 0000:af:00.1: cvl_0_1 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:35.397 02:51:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:35.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:35.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:29:35.397 00:29:35.397 --- 10.0.0.2 ping statistics --- 00:29:35.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.397 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:35.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:35.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:29:35.397 00:29:35.397 --- 10.0.0.1 ping statistics --- 00:29:35.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:35.397 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1106416 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1106416 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1106416 ']' 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:35.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.397 [2024-12-16 02:51:05.221460] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:29:35.397 [2024-12-16 02:51:05.221501] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:35.397 [2024-12-16 02:51:05.299041] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:35.397 [2024-12-16 02:51:05.321393] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:35.397 [2024-12-16 02:51:05.321431] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:35.397 [2024-12-16 02:51:05.321438] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:35.397 [2024-12-16 02:51:05.321444] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:35.397 [2024-12-16 02:51:05.321450] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:35.397 [2024-12-16 02:51:05.322674] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:35.397 [2024-12-16 02:51:05.322782] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:35.397 [2024-12-16 02:51:05.322784] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.397 [2024-12-16 02:51:05.453233] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.397 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.397 Malloc0 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.398 [2024-12-16 
02:51:05.516456] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.398 [2024-12-16 02:51:05.528411] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.398 Malloc1 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1106614 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1106614 /var/tmp/bdevperf.sock 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1106614 ']' 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:35.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.398 02:51:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.398 NVMe0n1 00:29:35.398 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.398 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:35.398 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:35.398 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.398 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.398 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.398 1 00:29:35.398 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:35.398 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:35.398 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:35.398 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:35.398 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:35.398 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:35.398 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:35.398 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:35.398 02:51:06 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.398 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.398 request: 00:29:35.398 { 00:29:35.398 "name": "NVMe0", 00:29:35.398 "trtype": "tcp", 00:29:35.398 "traddr": "10.0.0.2", 00:29:35.398 "adrfam": "ipv4", 00:29:35.398 "trsvcid": "4420", 00:29:35.398 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:35.398 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:35.398 "hostaddr": "10.0.0.1", 00:29:35.398 "prchk_reftag": false, 00:29:35.398 "prchk_guard": false, 00:29:35.398 "hdgst": false, 00:29:35.398 "ddgst": false, 00:29:35.398 "allow_unrecognized_csi": false, 00:29:35.398 "method": "bdev_nvme_attach_controller", 00:29:35.398 "req_id": 1 00:29:35.398 } 00:29:35.398 Got JSON-RPC error response 00:29:35.398 response: 00:29:35.398 { 00:29:35.398 "code": -114, 00:29:35.398 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:35.398 } 00:29:35.398 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:35.398 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:35.398 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:35.398 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:35.398 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:35.398 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:35.398 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:35.398 02:51:06 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:35.398 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:35.398 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:35.398 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:35.398 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:35.399 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:35.399 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.399 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.658 request: 00:29:35.658 { 00:29:35.658 "name": "NVMe0", 00:29:35.658 "trtype": "tcp", 00:29:35.658 "traddr": "10.0.0.2", 00:29:35.658 "adrfam": "ipv4", 00:29:35.658 "trsvcid": "4420", 00:29:35.658 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:35.658 "hostaddr": "10.0.0.1", 00:29:35.658 "prchk_reftag": false, 00:29:35.658 "prchk_guard": false, 00:29:35.658 "hdgst": false, 00:29:35.658 "ddgst": false, 00:29:35.658 "allow_unrecognized_csi": false, 00:29:35.658 "method": "bdev_nvme_attach_controller", 00:29:35.658 "req_id": 1 00:29:35.658 } 00:29:35.658 Got JSON-RPC error response 00:29:35.658 response: 00:29:35.658 { 00:29:35.658 "code": -114, 00:29:35.658 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:35.658 } 00:29:35.658 02:51:06 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.658 request: 00:29:35.658 { 00:29:35.658 "name": "NVMe0", 00:29:35.658 "trtype": "tcp", 00:29:35.658 "traddr": "10.0.0.2", 00:29:35.658 "adrfam": "ipv4", 00:29:35.658 "trsvcid": "4420", 00:29:35.658 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:35.658 "hostaddr": "10.0.0.1", 00:29:35.658 "prchk_reftag": false, 00:29:35.658 "prchk_guard": false, 00:29:35.658 "hdgst": false, 00:29:35.658 "ddgst": false, 00:29:35.658 "multipath": "disable", 00:29:35.658 "allow_unrecognized_csi": false, 00:29:35.658 "method": "bdev_nvme_attach_controller", 00:29:35.658 "req_id": 1 00:29:35.658 } 00:29:35.658 Got JSON-RPC error response 00:29:35.658 response: 00:29:35.658 { 00:29:35.658 "code": -114, 00:29:35.658 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:35.658 } 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.658 request: 00:29:35.658 { 00:29:35.658 "name": "NVMe0", 00:29:35.658 "trtype": "tcp", 00:29:35.658 "traddr": "10.0.0.2", 00:29:35.658 "adrfam": "ipv4", 00:29:35.658 "trsvcid": "4420", 00:29:35.658 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:35.658 "hostaddr": "10.0.0.1", 00:29:35.658 "prchk_reftag": false, 00:29:35.658 "prchk_guard": false, 00:29:35.658 "hdgst": false, 00:29:35.658 "ddgst": false, 00:29:35.658 "multipath": "failover", 00:29:35.658 "allow_unrecognized_csi": false, 00:29:35.658 "method": "bdev_nvme_attach_controller", 00:29:35.658 "req_id": 1 00:29:35.658 } 00:29:35.658 Got JSON-RPC error response 00:29:35.658 response: 00:29:35.658 { 00:29:35.658 "code": -114, 00:29:35.658 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:35.658 } 00:29:35.658 02:51:06 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.658 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.917 NVMe0n1 00:29:35.917 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.917 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:35.917 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.917 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:35.917 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.917 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:35.917 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.917 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.176 00:29:36.176 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.176 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:36.176 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:36.176 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.176 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:36.176 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.176 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:36.176 02:51:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:37.112 { 00:29:37.112 "results": [ 00:29:37.112 { 00:29:37.112 "job": "NVMe0n1", 00:29:37.112 "core_mask": "0x1", 00:29:37.112 "workload": "write", 00:29:37.112 "status": "finished", 00:29:37.112 "queue_depth": 128, 00:29:37.112 "io_size": 4096, 00:29:37.112 "runtime": 1.00475, 00:29:37.112 "iops": 25227.170938044288, 00:29:37.112 "mibps": 98.5436364767355, 00:29:37.112 "io_failed": 0, 00:29:37.112 "io_timeout": 0, 00:29:37.112 "avg_latency_us": 5067.511821554914, 00:29:37.112 "min_latency_us": 2980.327619047619, 00:29:37.112 "max_latency_us": 11921.310476190476 00:29:37.112 } 00:29:37.112 ], 00:29:37.112 "core_count": 1 00:29:37.112 } 00:29:37.112 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe1 00:29:37.112 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.112 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:37.112 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.112 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:37.112 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1106614 00:29:37.112 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1106614 ']' 00:29:37.112 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1106614 00:29:37.112 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:37.112 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:37.112 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1106614 00:29:37.371 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:37.371 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:37.371 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1106614' 00:29:37.371 killing process with pid 1106614 00:29:37.371 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1106614 00:29:37.371 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1106614 00:29:37.371 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:29:37.371 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.371 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:37.371 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.371 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:37.371 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.371 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:37.371 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:37.371 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:37.371 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:37.371 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:37.371 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:37.371 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:29:37.371 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:29:37.371 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:37.371 [2024-12-16 02:51:05.616676] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:29:37.372 [2024-12-16 02:51:05.616728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1106614 ] 00:29:37.372 [2024-12-16 02:51:05.693725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.372 [2024-12-16 02:51:05.715968] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:37.372 [2024-12-16 02:51:06.573879] bdev.c:4957:bdev_name_add: *ERROR*: Bdev name de121b7f-c38f-4066-8766-d8af36760472 already exists 00:29:37.372 [2024-12-16 02:51:06.573908] bdev.c:8177:bdev_register: *ERROR*: Unable to add uuid:de121b7f-c38f-4066-8766-d8af36760472 alias for bdev NVMe1n1 00:29:37.372 [2024-12-16 02:51:06.573916] bdev_nvme.c:4666:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:37.372 Running I/O for 1 seconds... 00:29:37.372 25219.00 IOPS, 98.51 MiB/s 00:29:37.372 Latency(us) 00:29:37.372 [2024-12-16T01:51:08.031Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.372 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:37.372 NVMe0n1 : 1.00 25227.17 98.54 0.00 0.00 5067.51 2980.33 11921.31 00:29:37.372 [2024-12-16T01:51:08.031Z] =================================================================================================================== 00:29:37.372 [2024-12-16T01:51:08.031Z] Total : 25227.17 98.54 0.00 0.00 5067.51 2980.33 11921.31 00:29:37.372 Received shutdown signal, test time was about 1.000000 seconds 00:29:37.372 00:29:37.372 Latency(us) 00:29:37.372 [2024-12-16T01:51:08.031Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.372 [2024-12-16T01:51:08.031Z] =================================================================================================================== 00:29:37.372 [2024-12-16T01:51:08.031Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:29:37.372 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:37.372 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:37.372 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:37.372 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:37.372 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:37.372 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:37.372 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:37.372 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:37.372 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:37.372 02:51:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:37.372 rmmod nvme_tcp 00:29:37.372 rmmod nvme_fabrics 00:29:37.631 rmmod nvme_keyring 00:29:37.631 02:51:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:37.631 02:51:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:37.631 02:51:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:29:37.631 02:51:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1106416 ']' 00:29:37.631 02:51:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1106416 00:29:37.631 02:51:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1106416 ']' 00:29:37.631 02:51:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1106416 
00:29:37.631 02:51:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:37.631 02:51:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:37.631 02:51:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1106416 00:29:37.631 02:51:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:37.631 02:51:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:37.631 02:51:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1106416' 00:29:37.631 killing process with pid 1106416 00:29:37.631 02:51:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1106416 00:29:37.631 02:51:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1106416 00:29:37.890 02:51:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:37.890 02:51:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:37.890 02:51:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:37.890 02:51:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:29:37.890 02:51:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:29:37.890 02:51:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:37.890 02:51:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:29:37.890 02:51:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:37.890 02:51:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:29:37.890 02:51:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.890 02:51:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:37.890 02:51:08 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.796 02:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:39.796 00:29:39.796 real 0m11.373s 00:29:39.796 user 0m13.108s 00:29:39.796 sys 0m5.211s 00:29:39.796 02:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:39.796 02:51:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:39.796 ************************************ 00:29:39.796 END TEST nvmf_multicontroller 00:29:39.796 ************************************ 00:29:39.796 02:51:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:39.796 02:51:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:39.796 02:51:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:39.796 02:51:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.056 ************************************ 00:29:40.056 START TEST nvmf_aer 00:29:40.056 ************************************ 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:40.056 * Looking for test storage... 
00:29:40.056 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:40.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.056 --rc genhtml_branch_coverage=1 00:29:40.056 --rc genhtml_function_coverage=1 00:29:40.056 --rc genhtml_legend=1 00:29:40.056 --rc geninfo_all_blocks=1 00:29:40.056 --rc geninfo_unexecuted_blocks=1 00:29:40.056 00:29:40.056 ' 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:40.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.056 --rc 
genhtml_branch_coverage=1 00:29:40.056 --rc genhtml_function_coverage=1 00:29:40.056 --rc genhtml_legend=1 00:29:40.056 --rc geninfo_all_blocks=1 00:29:40.056 --rc geninfo_unexecuted_blocks=1 00:29:40.056 00:29:40.056 ' 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:40.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.056 --rc genhtml_branch_coverage=1 00:29:40.056 --rc genhtml_function_coverage=1 00:29:40.056 --rc genhtml_legend=1 00:29:40.056 --rc geninfo_all_blocks=1 00:29:40.056 --rc geninfo_unexecuted_blocks=1 00:29:40.056 00:29:40.056 ' 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:40.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.056 --rc genhtml_branch_coverage=1 00:29:40.056 --rc genhtml_function_coverage=1 00:29:40.056 --rc genhtml_legend=1 00:29:40.056 --rc geninfo_all_blocks=1 00:29:40.056 --rc geninfo_unexecuted_blocks=1 00:29:40.056 00:29:40.056 ' 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:40.056 02:51:10 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.056 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:29:40.057 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.057 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:40.057 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:40.057 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:40.057 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:40.057 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:40.057 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:40.057 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:40.057 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:40.057 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:40.057 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:40.057 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:40.057 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:40.057 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:40.057 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:40.057 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:40.057 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:40.057 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:40.057 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.057 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:40.057 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:40.057 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:40.057 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:40.057 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:40.057 02:51:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:46.630 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:46.630 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:46.630 02:51:16 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:46.630 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:46.631 Found net devices under 0000:af:00.0: cvl_0_0 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:46.631 Found net devices under 0000:af:00.1: cvl_0_1 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:46.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
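The interface plumbing just performed can be summarized as the sketch below: move one port into a network namespace, address both ends, and open TCP/4420. The interface names cvl_0_0/cvl_0_1, the namespace name, and the addresses are taken from the log; everything else is illustrative. The script is print-only (the real commands need root and the actual NICs), so it can be piped to `sudo sh` on a machine that has this hardware.

```shell
#!/usr/bin/env bash
# Print-only sketch of the target-side network setup shown in the log above.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0 INIT_IF=cvl_0_1
TGT_IP=10.0.0.2 INIT_IP=10.0.0.1

emit() { echo "$*"; }                                   # swap for "sh -c" to apply

emit ip netns add "$NS"
emit ip link set "$TGT_IF" netns "$NS"                  # target port lives in the netns
emit ip addr add "$INIT_IP/24" dev "$INIT_IF"           # initiator side, host namespace
emit ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
emit ip link set "$INIT_IF" up
emit ip netns exec "$NS" ip link set "$TGT_IF" up
emit ip netns exec "$NS" ip link set lo up
emit iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
```

Once applied, the two ping checks in the log (host to 10.0.0.2, namespace to 10.0.0.1) confirm the topology before the target starts.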
00:29:46.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:29:46.631 00:29:46.631 --- 10.0.0.2 ping statistics --- 00:29:46.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.631 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:46.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:46.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:29:46.631 00:29:46.631 --- 10.0.0.1 ping statistics --- 00:29:46.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:46.631 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1110733 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1110733 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1110733 ']' 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:46.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:46.631 [2024-12-16 02:51:16.586360] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:29:46.631 [2024-12-16 02:51:16.586407] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:46.631 [2024-12-16 02:51:16.664503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:46.631 [2024-12-16 02:51:16.687910] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:46.631 [2024-12-16 02:51:16.687948] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:46.631 [2024-12-16 02:51:16.687955] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:46.631 [2024-12-16 02:51:16.687961] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:46.631 [2024-12-16 02:51:16.687966] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:46.631 [2024-12-16 02:51:16.691870] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:46.631 [2024-12-16 02:51:16.691909] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:46.631 [2024-12-16 02:51:16.692016] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:46.631 [2024-12-16 02:51:16.692017] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:46.631 [2024-12-16 02:51:16.824447] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:46.631 Malloc0 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:46.631 [2024-12-16 02:51:16.883263] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
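The rpc_cmd calls above configure the target in a fixed order: transport, malloc bdev, subsystem, namespace, listener. A condensed sketch of that sequence, assuming `scripts/rpc.py` as the RPC client path; by default it only prints the commands (set DRY_RUN=0 to issue them against a live target):

```shell
#!/usr/bin/env bash
set -u
RPC=${RPC:-scripts/rpc.py}        # assumed location of SPDK's RPC client
DRY_RUN=${DRY_RUN:-1}             # print-only by default; 0 = really invoke
run() { if [ "$DRY_RUN" = 1 ]; then echo "$RPC $*"; else "$RPC" "$@"; fi; }

run nvmf_create_transport -t tcp -o -u 8192                     # TCP transport
run bdev_malloc_create 64 512 --name Malloc0                    # 64 MiB, 512 B blocks
run nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -m 2                               # any host, max 2 namespaces
run nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # becomes nsid 1
run nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420                                  # the listener the log reports
```

The `-m 2` cap matters for this test: the second namespace added later (Malloc1, nsid 2) is what triggers the namespace-attribute-changed AER.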
00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.631 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:46.631 [ 00:29:46.631 { 00:29:46.631 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:46.631 "subtype": "Discovery", 00:29:46.631 "listen_addresses": [], 00:29:46.631 "allow_any_host": true, 00:29:46.632 "hosts": [] 00:29:46.632 }, 00:29:46.632 { 00:29:46.632 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:46.632 "subtype": "NVMe", 00:29:46.632 "listen_addresses": [ 00:29:46.632 { 00:29:46.632 "trtype": "TCP", 00:29:46.632 "adrfam": "IPv4", 00:29:46.632 "traddr": "10.0.0.2", 00:29:46.632 "trsvcid": "4420" 00:29:46.632 } 00:29:46.632 ], 00:29:46.632 "allow_any_host": true, 00:29:46.632 "hosts": [], 00:29:46.632 "serial_number": "SPDK00000000000001", 00:29:46.632 "model_number": "SPDK bdev Controller", 00:29:46.632 "max_namespaces": 2, 00:29:46.632 "min_cntlid": 1, 00:29:46.632 "max_cntlid": 65519, 00:29:46.632 "namespaces": [ 00:29:46.632 { 00:29:46.632 "nsid": 1, 00:29:46.632 "bdev_name": "Malloc0", 00:29:46.632 "name": "Malloc0", 00:29:46.632 "nguid": "B3D90D1EC1DE4E03BD3F8B3049607A2F", 00:29:46.632 "uuid": "b3d90d1e-c1de-4e03-bd3f-8b3049607a2f" 00:29:46.632 } 00:29:46.632 ] 00:29:46.632 } 00:29:46.632 ] 00:29:46.632 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.632 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:46.632 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:46.632 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1110954 00:29:46.632 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:46.632 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:46.632 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:29:46.632 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:46.632 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:29:46.632 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:29:46.632 02:51:16 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:46.632 Malloc1 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:46.632 [ 00:29:46.632 { 00:29:46.632 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:46.632 "subtype": "Discovery", 00:29:46.632 "listen_addresses": [], 00:29:46.632 "allow_any_host": true, 00:29:46.632 "hosts": [] 00:29:46.632 }, 00:29:46.632 { 00:29:46.632 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:46.632 "subtype": "NVMe", 00:29:46.632 "listen_addresses": [ 00:29:46.632 { 00:29:46.632 "trtype": "TCP", 00:29:46.632 "adrfam": "IPv4", 00:29:46.632 "traddr": "10.0.0.2", 00:29:46.632 "trsvcid": "4420" 00:29:46.632 } 00:29:46.632 ], 00:29:46.632 "allow_any_host": true, 00:29:46.632 "hosts": [], 00:29:46.632 "serial_number": "SPDK00000000000001", 00:29:46.632 "model_number": 
"SPDK bdev Controller", 00:29:46.632 "max_namespaces": 2, 00:29:46.632 "min_cntlid": 1, 00:29:46.632 "max_cntlid": 65519, 00:29:46.632 "namespaces": [ 00:29:46.632 { 00:29:46.632 "nsid": 1, 00:29:46.632 "bdev_name": "Malloc0", 00:29:46.632 "name": "Malloc0", 00:29:46.632 "nguid": "B3D90D1EC1DE4E03BD3F8B3049607A2F", 00:29:46.632 "uuid": "b3d90d1e-c1de-4e03-bd3f-8b3049607a2f" 00:29:46.632 }, 00:29:46.632 { 00:29:46.632 "nsid": 2, 00:29:46.632 "bdev_name": "Malloc1", 00:29:46.632 "name": "Malloc1", 00:29:46.632 Asynchronous Event Request test 00:29:46.632 Attaching to 10.0.0.2 00:29:46.632 Attached to 10.0.0.2 00:29:46.632 Registering asynchronous event callbacks... 00:29:46.632 Starting namespace attribute notice tests for all controllers... 00:29:46.632 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:46.632 aer_cb - Changed Namespace 00:29:46.632 Cleaning up... 00:29:46.632 "nguid": "EFC1FAD0EE64461891AAF708904DB8C8", 00:29:46.632 "uuid": "efc1fad0-ee64-4618-91aa-f708904db8c8" 00:29:46.632 } 00:29:46.632 ] 00:29:46.632 } 00:29:46.632 ] 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1110954 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:46.632 
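The waitforfile loop traced above gates the test on the aer tool: the tool is started with `-t /tmp/aer_touch_file` and the parent script polls until the tool creates that file (or a retry budget runs out). A self-contained sketch of that polling helper; the function name mirrors the log, but the body here is illustrative, not SPDK's exact implementation:

```shell
#!/usr/bin/env bash
# Poll for a file's existence with a bounded retry budget, as the AER test
# does while waiting for the aer tool to signal readiness via its touch file.
waitforfile() {
    local file=$1 max=${2:-200} i=0   # default budget: 200 * 0.1 s = 20 s
    while [ ! -e "$file" ] && [ "$i" -lt "$max" ]; do
        i=$((i + 1))
        sleep 0.1
    done
    [ -e "$file" ]                    # exit status 0 only if the file appeared
}
```

In the trace, the file shows up on the third check (i reaches 2), so the script moves on to add Malloc1 and observe the resulting AER almost immediately.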
02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:46.632 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:46.632 rmmod nvme_tcp 00:29:46.632 rmmod nvme_fabrics 00:29:46.892 rmmod nvme_keyring 00:29:46.892 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:46.892 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:46.892 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:46.892 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1110733 ']' 00:29:46.892 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1110733 00:29:46.892 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1110733 ']' 00:29:46.892 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 
-- # kill -0 1110733 00:29:46.892 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:29:46.892 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:46.892 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1110733 00:29:46.892 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:46.892 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:46.892 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1110733' 00:29:46.892 killing process with pid 1110733 00:29:46.892 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1110733 00:29:46.892 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1110733 00:29:46.892 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:46.892 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:46.892 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:46.892 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:29:46.892 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:29:46.892 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:29:46.892 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:46.892 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:46.892 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:46.892 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:46.892 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:46.892 02:51:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.428 02:51:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:49.428 00:29:49.428 real 0m9.145s 00:29:49.428 user 0m5.116s 00:29:49.428 sys 0m4.815s 00:29:49.428 02:51:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:49.428 02:51:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:49.428 ************************************ 00:29:49.428 END TEST nvmf_aer 00:29:49.428 ************************************ 00:29:49.428 02:51:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:49.428 02:51:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:49.428 02:51:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:49.428 02:51:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:49.428 ************************************ 00:29:49.428 START TEST nvmf_async_init 00:29:49.428 ************************************ 00:29:49.428 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:49.428 * Looking for test storage... 
00:29:49.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:49.428 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:49.428 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:29:49.428 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:49.428 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:49.428 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:49.428 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:49.428 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:49.428 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:49.428 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:49.428 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:49.428 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:49.428 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:49.428 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:49.428 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:49.428 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:49.428 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:29:49.428 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:49.428 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:49.428 02:51:19 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:49.428 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:49.428 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:49.428 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:49.428 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:49.428 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:49.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.429 --rc genhtml_branch_coverage=1 00:29:49.429 --rc genhtml_function_coverage=1 00:29:49.429 --rc genhtml_legend=1 00:29:49.429 --rc geninfo_all_blocks=1 00:29:49.429 --rc geninfo_unexecuted_blocks=1 00:29:49.429 
00:29:49.429 ' 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:49.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.429 --rc genhtml_branch_coverage=1 00:29:49.429 --rc genhtml_function_coverage=1 00:29:49.429 --rc genhtml_legend=1 00:29:49.429 --rc geninfo_all_blocks=1 00:29:49.429 --rc geninfo_unexecuted_blocks=1 00:29:49.429 00:29:49.429 ' 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:49.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.429 --rc genhtml_branch_coverage=1 00:29:49.429 --rc genhtml_function_coverage=1 00:29:49.429 --rc genhtml_legend=1 00:29:49.429 --rc geninfo_all_blocks=1 00:29:49.429 --rc geninfo_unexecuted_blocks=1 00:29:49.429 00:29:49.429 ' 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:49.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:49.429 --rc genhtml_branch_coverage=1 00:29:49.429 --rc genhtml_function_coverage=1 00:29:49.429 --rc genhtml_legend=1 00:29:49.429 --rc geninfo_all_blocks=1 00:29:49.429 --rc geninfo_unexecuted_blocks=1 00:29:49.429 00:29:49.429 ' 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:49.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=100da86369ec4594a916c3eba3ec6b00 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:49.429 02:51:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:56.002 02:51:25 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:56.002 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:56.002 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:56.002 Found net devices under 0000:af:00.0: cvl_0_0 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:56.002 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:56.002 Found net devices under 0000:af:00.1: cvl_0_1 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:56.003 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:56.003 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:29:56.003 00:29:56.003 --- 10.0.0.2 ping statistics --- 00:29:56.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.003 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:56.003 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:56.003 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:29:56.003 00:29:56.003 --- 10.0.0.1 ping statistics --- 00:29:56.003 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:56.003 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1114415 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1114415 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1114415 ']' 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:56.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:56.003 02:51:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.003 [2024-12-16 02:51:25.820273] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:29:56.003 [2024-12-16 02:51:25.820320] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:56.003 [2024-12-16 02:51:25.900867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.003 [2024-12-16 02:51:25.922631] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:56.003 [2024-12-16 02:51:25.922670] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:56.003 [2024-12-16 02:51:25.922678] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:56.003 [2024-12-16 02:51:25.922684] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:56.003 [2024-12-16 02:51:25.922688] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:56.003 [2024-12-16 02:51:25.923158] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:56.003 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:56.003 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:29:56.003 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:56.003 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:56.003 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.003 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:56.003 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:56.003 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.003 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.003 [2024-12-16 02:51:26.061928] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:56.003 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.003 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:56.003 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.003 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.003 null0 00:29:56.003 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.003 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:56.003 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.003 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.003 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.003 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:56.003 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.003 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.003 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.003 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 100da86369ec4594a916c3eba3ec6b00 00:29:56.003 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.003 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.003 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.003 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:56.003 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.003 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.003 [2024-12-16 02:51:26.106374] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:56.003 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.003 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.004 nvme0n1 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.004 [ 00:29:56.004 { 00:29:56.004 "name": "nvme0n1", 00:29:56.004 "aliases": [ 00:29:56.004 "100da863-69ec-4594-a916-c3eba3ec6b00" 00:29:56.004 ], 00:29:56.004 "product_name": "NVMe disk", 00:29:56.004 "block_size": 512, 00:29:56.004 "num_blocks": 2097152, 00:29:56.004 "uuid": "100da863-69ec-4594-a916-c3eba3ec6b00", 00:29:56.004 "numa_id": 1, 00:29:56.004 "assigned_rate_limits": { 00:29:56.004 "rw_ios_per_sec": 0, 00:29:56.004 "rw_mbytes_per_sec": 0, 00:29:56.004 "r_mbytes_per_sec": 0, 00:29:56.004 "w_mbytes_per_sec": 0 00:29:56.004 }, 00:29:56.004 "claimed": false, 00:29:56.004 "zoned": false, 00:29:56.004 "supported_io_types": { 00:29:56.004 "read": true, 00:29:56.004 "write": true, 00:29:56.004 "unmap": false, 00:29:56.004 "flush": true, 00:29:56.004 "reset": true, 00:29:56.004 "nvme_admin": true, 00:29:56.004 "nvme_io": true, 00:29:56.004 "nvme_io_md": false, 00:29:56.004 "write_zeroes": true, 00:29:56.004 "zcopy": false, 00:29:56.004 "get_zone_info": false, 00:29:56.004 "zone_management": false, 00:29:56.004 "zone_append": false, 00:29:56.004 "compare": true, 00:29:56.004 "compare_and_write": true, 00:29:56.004 "abort": true, 00:29:56.004 "seek_hole": false, 00:29:56.004 "seek_data": false, 00:29:56.004 "copy": true, 00:29:56.004 
"nvme_iov_md": false 00:29:56.004 }, 00:29:56.004 "memory_domains": [ 00:29:56.004 { 00:29:56.004 "dma_device_id": "system", 00:29:56.004 "dma_device_type": 1 00:29:56.004 } 00:29:56.004 ], 00:29:56.004 "driver_specific": { 00:29:56.004 "nvme": [ 00:29:56.004 { 00:29:56.004 "trid": { 00:29:56.004 "trtype": "TCP", 00:29:56.004 "adrfam": "IPv4", 00:29:56.004 "traddr": "10.0.0.2", 00:29:56.004 "trsvcid": "4420", 00:29:56.004 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:56.004 }, 00:29:56.004 "ctrlr_data": { 00:29:56.004 "cntlid": 1, 00:29:56.004 "vendor_id": "0x8086", 00:29:56.004 "model_number": "SPDK bdev Controller", 00:29:56.004 "serial_number": "00000000000000000000", 00:29:56.004 "firmware_revision": "25.01", 00:29:56.004 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:56.004 "oacs": { 00:29:56.004 "security": 0, 00:29:56.004 "format": 0, 00:29:56.004 "firmware": 0, 00:29:56.004 "ns_manage": 0 00:29:56.004 }, 00:29:56.004 "multi_ctrlr": true, 00:29:56.004 "ana_reporting": false 00:29:56.004 }, 00:29:56.004 "vs": { 00:29:56.004 "nvme_version": "1.3" 00:29:56.004 }, 00:29:56.004 "ns_data": { 00:29:56.004 "id": 1, 00:29:56.004 "can_share": true 00:29:56.004 } 00:29:56.004 } 00:29:56.004 ], 00:29:56.004 "mp_policy": "active_passive" 00:29:56.004 } 00:29:56.004 } 00:29:56.004 ] 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.004 [2024-12-16 02:51:26.367955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:56.004 [2024-12-16 02:51:26.368012] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x799a90 (9): Bad file descriptor 00:29:56.004 [2024-12-16 02:51:26.499927] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.004 [ 00:29:56.004 { 00:29:56.004 "name": "nvme0n1", 00:29:56.004 "aliases": [ 00:29:56.004 "100da863-69ec-4594-a916-c3eba3ec6b00" 00:29:56.004 ], 00:29:56.004 "product_name": "NVMe disk", 00:29:56.004 "block_size": 512, 00:29:56.004 "num_blocks": 2097152, 00:29:56.004 "uuid": "100da863-69ec-4594-a916-c3eba3ec6b00", 00:29:56.004 "numa_id": 1, 00:29:56.004 "assigned_rate_limits": { 00:29:56.004 "rw_ios_per_sec": 0, 00:29:56.004 "rw_mbytes_per_sec": 0, 00:29:56.004 "r_mbytes_per_sec": 0, 00:29:56.004 "w_mbytes_per_sec": 0 00:29:56.004 }, 00:29:56.004 "claimed": false, 00:29:56.004 "zoned": false, 00:29:56.004 "supported_io_types": { 00:29:56.004 "read": true, 00:29:56.004 "write": true, 00:29:56.004 "unmap": false, 00:29:56.004 "flush": true, 00:29:56.004 "reset": true, 00:29:56.004 "nvme_admin": true, 00:29:56.004 "nvme_io": true, 00:29:56.004 "nvme_io_md": false, 00:29:56.004 "write_zeroes": true, 00:29:56.004 "zcopy": false, 00:29:56.004 "get_zone_info": false, 00:29:56.004 "zone_management": false, 00:29:56.004 "zone_append": false, 00:29:56.004 "compare": true, 00:29:56.004 "compare_and_write": true, 00:29:56.004 "abort": true, 00:29:56.004 "seek_hole": false, 00:29:56.004 "seek_data": false, 00:29:56.004 "copy": true, 00:29:56.004 "nvme_iov_md": false 00:29:56.004 }, 00:29:56.004 "memory_domains": [ 
00:29:56.004 { 00:29:56.004 "dma_device_id": "system", 00:29:56.004 "dma_device_type": 1 00:29:56.004 } 00:29:56.004 ], 00:29:56.004 "driver_specific": { 00:29:56.004 "nvme": [ 00:29:56.004 { 00:29:56.004 "trid": { 00:29:56.004 "trtype": "TCP", 00:29:56.004 "adrfam": "IPv4", 00:29:56.004 "traddr": "10.0.0.2", 00:29:56.004 "trsvcid": "4420", 00:29:56.004 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:56.004 }, 00:29:56.004 "ctrlr_data": { 00:29:56.004 "cntlid": 2, 00:29:56.004 "vendor_id": "0x8086", 00:29:56.004 "model_number": "SPDK bdev Controller", 00:29:56.004 "serial_number": "00000000000000000000", 00:29:56.004 "firmware_revision": "25.01", 00:29:56.004 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:56.004 "oacs": { 00:29:56.004 "security": 0, 00:29:56.004 "format": 0, 00:29:56.004 "firmware": 0, 00:29:56.004 "ns_manage": 0 00:29:56.004 }, 00:29:56.004 "multi_ctrlr": true, 00:29:56.004 "ana_reporting": false 00:29:56.004 }, 00:29:56.004 "vs": { 00:29:56.004 "nvme_version": "1.3" 00:29:56.004 }, 00:29:56.004 "ns_data": { 00:29:56.004 "id": 1, 00:29:56.004 "can_share": true 00:29:56.004 } 00:29:56.004 } 00:29:56.004 ], 00:29:56.004 "mp_policy": "active_passive" 00:29:56.004 } 00:29:56.004 } 00:29:56.004 ] 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.UoJj8ChSRB 
00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.UoJj8ChSRB 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.UoJj8ChSRB 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.004 [2024-12-16 02:51:26.572561] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:56.004 [2024-12-16 02:51:26.572662] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:29:56.004 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:56.005 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.005 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.005 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.005 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:56.005 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.005 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.005 [2024-12-16 02:51:26.588613] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:56.005 nvme0n1 00:29:56.005 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.005 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:56.005 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.005 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.263 [ 00:29:56.263 { 00:29:56.263 "name": "nvme0n1", 00:29:56.263 "aliases": [ 00:29:56.263 "100da863-69ec-4594-a916-c3eba3ec6b00" 00:29:56.263 ], 00:29:56.263 "product_name": "NVMe disk", 00:29:56.263 "block_size": 512, 00:29:56.263 "num_blocks": 2097152, 00:29:56.263 "uuid": "100da863-69ec-4594-a916-c3eba3ec6b00", 00:29:56.263 "numa_id": 1, 00:29:56.263 "assigned_rate_limits": { 00:29:56.263 "rw_ios_per_sec": 0, 00:29:56.263 
"rw_mbytes_per_sec": 0, 00:29:56.263 "r_mbytes_per_sec": 0, 00:29:56.263 "w_mbytes_per_sec": 0 00:29:56.263 }, 00:29:56.263 "claimed": false, 00:29:56.263 "zoned": false, 00:29:56.263 "supported_io_types": { 00:29:56.263 "read": true, 00:29:56.263 "write": true, 00:29:56.263 "unmap": false, 00:29:56.263 "flush": true, 00:29:56.263 "reset": true, 00:29:56.263 "nvme_admin": true, 00:29:56.263 "nvme_io": true, 00:29:56.263 "nvme_io_md": false, 00:29:56.263 "write_zeroes": true, 00:29:56.263 "zcopy": false, 00:29:56.263 "get_zone_info": false, 00:29:56.263 "zone_management": false, 00:29:56.263 "zone_append": false, 00:29:56.263 "compare": true, 00:29:56.263 "compare_and_write": true, 00:29:56.263 "abort": true, 00:29:56.263 "seek_hole": false, 00:29:56.263 "seek_data": false, 00:29:56.263 "copy": true, 00:29:56.263 "nvme_iov_md": false 00:29:56.263 }, 00:29:56.263 "memory_domains": [ 00:29:56.263 { 00:29:56.263 "dma_device_id": "system", 00:29:56.263 "dma_device_type": 1 00:29:56.263 } 00:29:56.263 ], 00:29:56.263 "driver_specific": { 00:29:56.263 "nvme": [ 00:29:56.263 { 00:29:56.263 "trid": { 00:29:56.263 "trtype": "TCP", 00:29:56.263 "adrfam": "IPv4", 00:29:56.263 "traddr": "10.0.0.2", 00:29:56.263 "trsvcid": "4421", 00:29:56.263 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:56.263 }, 00:29:56.263 "ctrlr_data": { 00:29:56.263 "cntlid": 3, 00:29:56.263 "vendor_id": "0x8086", 00:29:56.263 "model_number": "SPDK bdev Controller", 00:29:56.263 "serial_number": "00000000000000000000", 00:29:56.263 "firmware_revision": "25.01", 00:29:56.263 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:56.263 "oacs": { 00:29:56.263 "security": 0, 00:29:56.263 "format": 0, 00:29:56.263 "firmware": 0, 00:29:56.263 "ns_manage": 0 00:29:56.263 }, 00:29:56.263 "multi_ctrlr": true, 00:29:56.263 "ana_reporting": false 00:29:56.263 }, 00:29:56.263 "vs": { 00:29:56.263 "nvme_version": "1.3" 00:29:56.263 }, 00:29:56.263 "ns_data": { 00:29:56.263 "id": 1, 00:29:56.263 "can_share": true 00:29:56.264 } 
00:29:56.264 } 00:29:56.264 ], 00:29:56.264 "mp_policy": "active_passive" 00:29:56.264 } 00:29:56.264 } 00:29:56.264 ] 00:29:56.264 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.264 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:56.264 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:56.264 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:56.264 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:56.264 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.UoJj8ChSRB 00:29:56.264 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:29:56.264 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:56.264 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:56.264 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:56.264 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:56.264 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:56.264 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:56.264 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:56.264 rmmod nvme_tcp 00:29:56.264 rmmod nvme_fabrics 00:29:56.264 rmmod nvme_keyring 00:29:56.264 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:56.264 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:56.264 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:56.264 02:51:26 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1114415 ']' 00:29:56.264 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1114415 00:29:56.264 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1114415 ']' 00:29:56.264 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1114415 00:29:56.264 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:29:56.264 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:56.264 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1114415 00:29:56.264 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:56.264 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:56.264 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1114415' 00:29:56.264 killing process with pid 1114415 00:29:56.264 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1114415 00:29:56.264 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1114415 00:29:56.523 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:56.523 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:56.523 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:56.523 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:29:56.523 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:29:56.523 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:56.523 
02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:29:56.523 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:56.523 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:56.523 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:56.523 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:56.523 02:51:26 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:58.428 02:51:29 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:58.428 00:29:58.429 real 0m9.352s 00:29:58.429 user 0m3.002s 00:29:58.429 sys 0m4.786s 00:29:58.429 02:51:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:58.429 02:51:29 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:58.429 ************************************ 00:29:58.429 END TEST nvmf_async_init 00:29:58.429 ************************************ 00:29:58.429 02:51:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:58.429 02:51:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:58.429 02:51:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:58.429 02:51:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.688 ************************************ 00:29:58.688 START TEST dma 00:29:58.688 ************************************ 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:29:58.688 * Looking for test storage... 00:29:58.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:58.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.688 --rc genhtml_branch_coverage=1 00:29:58.688 --rc genhtml_function_coverage=1 00:29:58.688 --rc genhtml_legend=1 00:29:58.688 --rc geninfo_all_blocks=1 00:29:58.688 --rc geninfo_unexecuted_blocks=1 00:29:58.688 00:29:58.688 ' 00:29:58.688 02:51:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:58.688 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.689 --rc genhtml_branch_coverage=1 00:29:58.689 --rc genhtml_function_coverage=1 
00:29:58.689 --rc genhtml_legend=1 00:29:58.689 --rc geninfo_all_blocks=1 00:29:58.689 --rc geninfo_unexecuted_blocks=1 00:29:58.689 00:29:58.689 ' 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:58.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.689 --rc genhtml_branch_coverage=1 00:29:58.689 --rc genhtml_function_coverage=1 00:29:58.689 --rc genhtml_legend=1 00:29:58.689 --rc geninfo_all_blocks=1 00:29:58.689 --rc geninfo_unexecuted_blocks=1 00:29:58.689 00:29:58.689 ' 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:58.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.689 --rc genhtml_branch_coverage=1 00:29:58.689 --rc genhtml_function_coverage=1 00:29:58.689 --rc genhtml_legend=1 00:29:58.689 --rc geninfo_all_blocks=1 00:29:58.689 --rc geninfo_unexecuted_blocks=1 00:29:58.689 00:29:58.689 ' 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:58.689 
02:51:29 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:58.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:58.689 00:29:58.689 real 0m0.206s 00:29:58.689 user 0m0.125s 00:29:58.689 sys 0m0.095s 00:29:58.689 02:51:29 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:58.689 ************************************ 00:29:58.689 END TEST dma 00:29:58.689 ************************************ 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:58.689 02:51:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.949 ************************************ 00:29:58.949 START TEST nvmf_identify 00:29:58.949 ************************************ 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:58.949 * Looking for test storage... 
00:29:58.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:58.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.949 --rc genhtml_branch_coverage=1 00:29:58.949 --rc genhtml_function_coverage=1 00:29:58.949 --rc genhtml_legend=1 00:29:58.949 --rc geninfo_all_blocks=1 00:29:58.949 --rc geninfo_unexecuted_blocks=1 00:29:58.949 00:29:58.949 ' 00:29:58.949 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:29:58.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.949 --rc genhtml_branch_coverage=1 00:29:58.949 --rc genhtml_function_coverage=1 00:29:58.949 --rc genhtml_legend=1 00:29:58.949 --rc geninfo_all_blocks=1 00:29:58.949 --rc geninfo_unexecuted_blocks=1 00:29:58.949 00:29:58.950 ' 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:58.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.950 --rc genhtml_branch_coverage=1 00:29:58.950 --rc genhtml_function_coverage=1 00:29:58.950 --rc genhtml_legend=1 00:29:58.950 --rc geninfo_all_blocks=1 00:29:58.950 --rc geninfo_unexecuted_blocks=1 00:29:58.950 00:29:58.950 ' 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:58.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.950 --rc genhtml_branch_coverage=1 00:29:58.950 --rc genhtml_function_coverage=1 00:29:58.950 --rc genhtml_legend=1 00:29:58.950 --rc geninfo_all_blocks=1 00:29:58.950 --rc geninfo_unexecuted_blocks=1 00:29:58.950 00:29:58.950 ' 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:58.950 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:29:58.950 02:51:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:05.523 02:51:35 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:05.523 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:05.523 
02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:05.523 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:05.523 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:05.524 Found net devices under 0000:af:00.0: cvl_0_0 00:30:05.524 02:51:35 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:05.524 Found net devices under 0000:af:00.1: cvl_0_1 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:05.524 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:05.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:30:05.524 00:30:05.524 --- 10.0.0.2 ping statistics --- 00:30:05.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.524 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:05.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:05.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:30:05.524 00:30:05.524 --- 10.0.0.1 ping statistics --- 00:30:05.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:05.524 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1118134 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1118134 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1118134 ']' 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:05.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
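[Editor's note] The harness above starts `nvmf_tgt` in the background and then blocks in `waitforlisten` until the process is up and listening on the UNIX domain socket `/var/tmp/spdk.sock`. The core of that helper is a bounded poll on a file test. This is a minimal sketch of the pattern, not the actual `waitforlisten` implementation; the function name, the 0.1 s poll interval, and the default retry budget are assumptions:

```shell
# Sketch of the "wait for the RPC socket" pattern used by waitforlisten.
# Polls until $1 passes the file test in $2 (-S = UNIX socket, the case in
# this log) or the retry budget in $3 is exhausted. Names/defaults here are
# illustrative assumptions, not SPDK's actual code.
wait_for_path() {
    path=$1
    op=${2:--S}          # file-test operator; -S matches a socket
    max_retries=${3:-100}
    i=0
    while [ "$i" -lt "$max_retries" ]; do
        # test(1) evaluates its operator argument at runtime, so the
        # operator can be parameterized.
        if test "$op" "$path"; then
            return 0
        fi
        i=$((i + 1))
        sleep 0.1
    done
    return 1             # timed out; caller should treat this as failure
}
```

The real helper additionally checks that the target PID is still alive between polls, so a crashed `nvmf_tgt` fails fast instead of burning the whole retry budget.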
00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:05.524 [2024-12-16 02:51:35.541727] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:30:05.524 [2024-12-16 02:51:35.541772] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.524 [2024-12-16 02:51:35.621831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:05.524 [2024-12-16 02:51:35.646457] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:05.524 [2024-12-16 02:51:35.646495] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:05.524 [2024-12-16 02:51:35.646502] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:05.524 [2024-12-16 02:51:35.646509] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:05.524 [2024-12-16 02:51:35.646514] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
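[Editor's note] The `rpc_cmd` calls that follow (host/identify.sh lines 24-35) build the target configuration that `nvmf_get_subsystems` later dumps as JSON: a TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, subsystem `nqn.2016-06.io.spdk:cnode1` with that bdev as namespace 1, and TCP listeners for both the subsystem and the discovery service on 10.0.0.2:4420. Against a live target the same state can be reproduced by hand with `scripts/rpc.py` (a sketch assuming the default `/var/tmp/spdk.sock` RPC socket, mirroring the arguments recorded in this log):

```shell
# Requires a running nvmf_tgt; arguments taken from the rpc_cmd trace above.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

In the test itself these RPCs are issued through the `rpc_cmd` wrapper, which targets the socket inside the `cvl_0_0_ns_spdk` namespace set up earlier in the log.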
00:30:05.524 [2024-12-16 02:51:35.647984] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:05.524 [2024-12-16 02:51:35.648096] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:30:05.524 [2024-12-16 02:51:35.648111] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:30:05.524 [2024-12-16 02:51:35.648117] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:05.524 [2024-12-16 02:51:35.740293] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:05.524 Malloc0 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.524 02:51:35 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.524 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:30:05.525 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.525 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:05.525 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.525 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:05.525 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.525 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:05.525 [2024-12-16 02:51:35.844577] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:05.525 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.525 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:05.525 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.525 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:05.525 02:51:35 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.525 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:05.525 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.525 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:05.525 [ 00:30:05.525 { 00:30:05.525 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:05.525 "subtype": "Discovery", 00:30:05.525 "listen_addresses": [ 00:30:05.525 { 00:30:05.525 "trtype": "TCP", 00:30:05.525 "adrfam": "IPv4", 00:30:05.525 "traddr": "10.0.0.2", 00:30:05.525 "trsvcid": "4420" 00:30:05.525 } 00:30:05.525 ], 00:30:05.525 "allow_any_host": true, 00:30:05.525 "hosts": [] 00:30:05.525 }, 00:30:05.525 { 00:30:05.525 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:05.525 "subtype": "NVMe", 00:30:05.525 "listen_addresses": [ 00:30:05.525 { 00:30:05.525 "trtype": "TCP", 00:30:05.525 "adrfam": "IPv4", 00:30:05.525 "traddr": "10.0.0.2", 00:30:05.525 "trsvcid": "4420" 00:30:05.525 } 00:30:05.525 ], 00:30:05.525 "allow_any_host": true, 00:30:05.525 "hosts": [], 00:30:05.525 "serial_number": "SPDK00000000000001", 00:30:05.525 "model_number": "SPDK bdev Controller", 00:30:05.525 "max_namespaces": 32, 00:30:05.525 "min_cntlid": 1, 00:30:05.525 "max_cntlid": 65519, 00:30:05.525 "namespaces": [ 00:30:05.525 { 00:30:05.525 "nsid": 1, 00:30:05.525 "bdev_name": "Malloc0", 00:30:05.525 "name": "Malloc0", 00:30:05.525 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:05.525 "eui64": "ABCDEF0123456789", 00:30:05.525 "uuid": "99515b98-6863-437e-8275-ef8ac80d1076" 00:30:05.525 } 00:30:05.525 ] 00:30:05.525 } 00:30:05.525 ] 00:30:05.525 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.525 02:51:35 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:05.525 [2024-12-16 02:51:35.899545] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:30:05.525 [2024-12-16 02:51:35.899582] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1118208 ] 00:30:05.525 [2024-12-16 02:51:35.940313] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:30:05.525 [2024-12-16 02:51:35.940357] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:05.525 [2024-12-16 02:51:35.940361] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:05.525 [2024-12-16 02:51:35.940372] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:05.525 [2024-12-16 02:51:35.940380] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:05.525 [2024-12-16 02:51:35.944064] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:30:05.525 [2024-12-16 02:51:35.944102] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xac6ed0 0 00:30:05.525 [2024-12-16 02:51:35.944284] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:05.525 [2024-12-16 02:51:35.944292] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:05.525 [2024-12-16 02:51:35.944296] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:05.525 [2024-12-16 02:51:35.944299] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:05.525 [2024-12-16 02:51:35.944325] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.525 [2024-12-16 02:51:35.944330] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.525 [2024-12-16 02:51:35.944333] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xac6ed0) 00:30:05.525 [2024-12-16 02:51:35.944345] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:05.525 [2024-12-16 02:51:35.944357] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb32540, cid 0, qid 0 00:30:05.525 [2024-12-16 02:51:35.950857] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.525 [2024-12-16 02:51:35.950865] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.525 [2024-12-16 02:51:35.950869] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.525 [2024-12-16 02:51:35.950873] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb32540) on tqpair=0xac6ed0 00:30:05.525 [2024-12-16 02:51:35.950885] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:05.525 [2024-12-16 02:51:35.950891] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:30:05.525 [2024-12-16 02:51:35.950896] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:30:05.525 [2024-12-16 02:51:35.950906] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.525 [2024-12-16 02:51:35.950910] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.525 [2024-12-16 02:51:35.950913] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xac6ed0) 
00:30:05.525 [2024-12-16 02:51:35.950920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.525 [2024-12-16 02:51:35.950931] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb32540, cid 0, qid 0 00:30:05.525 [2024-12-16 02:51:35.951103] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.525 [2024-12-16 02:51:35.951109] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.525 [2024-12-16 02:51:35.951114] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.525 [2024-12-16 02:51:35.951118] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb32540) on tqpair=0xac6ed0 00:30:05.525 [2024-12-16 02:51:35.951123] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:30:05.525 [2024-12-16 02:51:35.951129] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:30:05.525 [2024-12-16 02:51:35.951135] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.525 [2024-12-16 02:51:35.951138] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.525 [2024-12-16 02:51:35.951141] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xac6ed0) 00:30:05.525 [2024-12-16 02:51:35.951147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.525 [2024-12-16 02:51:35.951157] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb32540, cid 0, qid 0 00:30:05.525 [2024-12-16 02:51:35.951220] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.525 [2024-12-16 02:51:35.951226] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:30:05.525 [2024-12-16 02:51:35.951229] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.525 [2024-12-16 02:51:35.951233] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb32540) on tqpair=0xac6ed0 00:30:05.525 [2024-12-16 02:51:35.951237] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:30:05.525 [2024-12-16 02:51:35.951244] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:05.525 [2024-12-16 02:51:35.951250] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.525 [2024-12-16 02:51:35.951253] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.525 [2024-12-16 02:51:35.951256] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xac6ed0) 00:30:05.525 [2024-12-16 02:51:35.951261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.525 [2024-12-16 02:51:35.951270] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb32540, cid 0, qid 0 00:30:05.525 [2024-12-16 02:51:35.951333] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.525 [2024-12-16 02:51:35.951339] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.525 [2024-12-16 02:51:35.951341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.525 [2024-12-16 02:51:35.951345] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb32540) on tqpair=0xac6ed0 00:30:05.525 [2024-12-16 02:51:35.951349] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:05.525 [2024-12-16 02:51:35.951357] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.525 [2024-12-16 02:51:35.951361] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.525 [2024-12-16 02:51:35.951364] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xac6ed0) 00:30:05.525 [2024-12-16 02:51:35.951369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.525 [2024-12-16 02:51:35.951378] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb32540, cid 0, qid 0 00:30:05.525 [2024-12-16 02:51:35.951446] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.525 [2024-12-16 02:51:35.951451] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.525 [2024-12-16 02:51:35.951454] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.525 [2024-12-16 02:51:35.951458] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb32540) on tqpair=0xac6ed0 00:30:05.525 [2024-12-16 02:51:35.951463] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:05.525 [2024-12-16 02:51:35.951468] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:05.526 [2024-12-16 02:51:35.951475] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:05.526 [2024-12-16 02:51:35.951583] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:30:05.526 [2024-12-16 02:51:35.951587] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:30:05.526 [2024-12-16 02:51:35.951594] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.526 [2024-12-16 02:51:35.951597] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.526 [2024-12-16 02:51:35.951600] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xac6ed0) 00:30:05.526 [2024-12-16 02:51:35.951605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.526 [2024-12-16 02:51:35.951615] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb32540, cid 0, qid 0 00:30:05.526 [2024-12-16 02:51:35.951677] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.526 [2024-12-16 02:51:35.951683] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.526 [2024-12-16 02:51:35.951685] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.526 [2024-12-16 02:51:35.951689] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb32540) on tqpair=0xac6ed0 00:30:05.526 [2024-12-16 02:51:35.951693] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:05.526 [2024-12-16 02:51:35.951701] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.526 [2024-12-16 02:51:35.951704] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.526 [2024-12-16 02:51:35.951707] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xac6ed0) 00:30:05.526 [2024-12-16 02:51:35.951713] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.526 [2024-12-16 02:51:35.951721] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb32540, cid 0, qid 0 00:30:05.526 [2024-12-16 
02:51:35.951789] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.526 [2024-12-16 02:51:35.951794] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.526 [2024-12-16 02:51:35.951797] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.526 [2024-12-16 02:51:35.951800] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb32540) on tqpair=0xac6ed0 00:30:05.526 [2024-12-16 02:51:35.951804] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:05.526 [2024-12-16 02:51:35.951808] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:05.526 [2024-12-16 02:51:35.951816] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:30:05.526 [2024-12-16 02:51:35.951823] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:05.526 [2024-12-16 02:51:35.951830] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.526 [2024-12-16 02:51:35.951833] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xac6ed0) 00:30:05.526 [2024-12-16 02:51:35.951839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.526 [2024-12-16 02:51:35.951856] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb32540, cid 0, qid 0 00:30:05.526 [2024-12-16 02:51:35.951951] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:05.526 [2024-12-16 02:51:35.951957] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =7 00:30:05.526 [2024-12-16 02:51:35.951960] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:05.526 [2024-12-16 02:51:35.951963] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xac6ed0): datao=0, datal=4096, cccid=0 00:30:05.526 [2024-12-16 02:51:35.951967] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb32540) on tqpair(0xac6ed0): expected_datao=0, payload_size=4096 00:30:05.526 [2024-12-16 02:51:35.951971] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.526 [2024-12-16 02:51:35.951977] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:05.526 [2024-12-16 02:51:35.951980] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:05.526 [2024-12-16 02:51:35.951997] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.526 [2024-12-16 02:51:35.952002] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.526 [2024-12-16 02:51:35.952005] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.526 [2024-12-16 02:51:35.952008] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb32540) on tqpair=0xac6ed0 00:30:05.526 [2024-12-16 02:51:35.952015] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:30:05.526 [2024-12-16 02:51:35.952019] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:30:05.526 [2024-12-16 02:51:35.952023] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:30:05.526 [2024-12-16 02:51:35.952027] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:30:05.526 [2024-12-16 02:51:35.952031] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
fuses compare and write: 1 00:30:05.526 [2024-12-16 02:51:35.952035] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:30:05.526 [2024-12-16 02:51:35.952045] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:05.526 [2024-12-16 02:51:35.952055] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.526 [2024-12-16 02:51:35.952059] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.526 [2024-12-16 02:51:35.952062] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xac6ed0) 00:30:05.526 [2024-12-16 02:51:35.952068] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:05.526 [2024-12-16 02:51:35.952077] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb32540, cid 0, qid 0 00:30:05.526 [2024-12-16 02:51:35.952137] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.526 [2024-12-16 02:51:35.952142] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.526 [2024-12-16 02:51:35.952145] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.526 [2024-12-16 02:51:35.952148] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb32540) on tqpair=0xac6ed0 00:30:05.526 [2024-12-16 02:51:35.952155] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.526 [2024-12-16 02:51:35.952158] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.526 [2024-12-16 02:51:35.952161] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xac6ed0) 00:30:05.526 [2024-12-16 02:51:35.952166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:05.526 [2024-12-16 02:51:35.952173] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.526 [2024-12-16 02:51:35.952176] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.526 [2024-12-16 02:51:35.952179] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xac6ed0) 00:30:05.526 [2024-12-16 02:51:35.952184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:05.526 [2024-12-16 02:51:35.952189] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.526 [2024-12-16 02:51:35.952192] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.526 [2024-12-16 02:51:35.952195] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xac6ed0) 00:30:05.526 [2024-12-16 02:51:35.952200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:05.526 [2024-12-16 02:51:35.952205] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.526 [2024-12-16 02:51:35.952208] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.526 [2024-12-16 02:51:35.952211] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac6ed0) 00:30:05.526 [2024-12-16 02:51:35.952215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:05.526 [2024-12-16 02:51:35.952220] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:05.526 [2024-12-16 02:51:35.952234] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep 
alive timeout (timeout 30000 ms) 00:30:05.526 [2024-12-16 02:51:35.952240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.526 [2024-12-16 02:51:35.952243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xac6ed0) 00:30:05.526 [2024-12-16 02:51:35.952248] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.526 [2024-12-16 02:51:35.952258] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb32540, cid 0, qid 0 00:30:05.526 [2024-12-16 02:51:35.952263] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb326c0, cid 1, qid 0 00:30:05.526 [2024-12-16 02:51:35.952267] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb32840, cid 2, qid 0 00:30:05.526 [2024-12-16 02:51:35.952271] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb329c0, cid 3, qid 0 00:30:05.526 [2024-12-16 02:51:35.952275] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb32b40, cid 4, qid 0 00:30:05.526 [2024-12-16 02:51:35.952369] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.526 [2024-12-16 02:51:35.952375] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.526 [2024-12-16 02:51:35.952378] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.526 [2024-12-16 02:51:35.952381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb32b40) on tqpair=0xac6ed0 00:30:05.526 [2024-12-16 02:51:35.952385] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:30:05.526 [2024-12-16 02:51:35.952390] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:30:05.526 [2024-12-16 02:51:35.952398] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.526 [2024-12-16 02:51:35.952401] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xac6ed0) 00:30:05.526 [2024-12-16 02:51:35.952407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.526 [2024-12-16 02:51:35.952416] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb32b40, cid 4, qid 0 00:30:05.526 [2024-12-16 02:51:35.952488] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:05.526 [2024-12-16 02:51:35.952496] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:05.526 [2024-12-16 02:51:35.952499] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:05.526 [2024-12-16 02:51:35.952502] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xac6ed0): datao=0, datal=4096, cccid=4 00:30:05.526 [2024-12-16 02:51:35.952506] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb32b40) on tqpair(0xac6ed0): expected_datao=0, payload_size=4096 00:30:05.527 [2024-12-16 02:51:35.952510] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.527 [2024-12-16 02:51:35.952515] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:05.527 [2024-12-16 02:51:35.952518] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:05.527 [2024-12-16 02:51:35.993854] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.527 [2024-12-16 02:51:35.993865] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.527 [2024-12-16 02:51:35.993869] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.527 [2024-12-16 02:51:35.993872] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb32b40) on tqpair=0xac6ed0 00:30:05.527 [2024-12-16 02:51:35.993886] 
nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:30:05.527 [2024-12-16 02:51:35.993910] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.527 [2024-12-16 02:51:35.993914] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xac6ed0) 00:30:05.527 [2024-12-16 02:51:35.993922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.527 [2024-12-16 02:51:35.993928] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.527 [2024-12-16 02:51:35.993931] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.527 [2024-12-16 02:51:35.993934] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xac6ed0) 00:30:05.527 [2024-12-16 02:51:35.993939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:05.527 [2024-12-16 02:51:35.993955] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb32b40, cid 4, qid 0 00:30:05.527 [2024-12-16 02:51:35.993960] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb32cc0, cid 5, qid 0 00:30:05.527 [2024-12-16 02:51:35.994062] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:05.527 [2024-12-16 02:51:35.994068] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:05.527 [2024-12-16 02:51:35.994071] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:05.527 [2024-12-16 02:51:35.994075] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xac6ed0): datao=0, datal=1024, cccid=4 00:30:05.527 [2024-12-16 02:51:35.994079] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb32b40) on tqpair(0xac6ed0): expected_datao=0, 
payload_size=1024 00:30:05.527 [2024-12-16 02:51:35.994083] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.527 [2024-12-16 02:51:35.994088] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:05.527 [2024-12-16 02:51:35.994092] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:05.527 [2024-12-16 02:51:35.994097] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.527 [2024-12-16 02:51:35.994101] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.527 [2024-12-16 02:51:35.994104] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.527 [2024-12-16 02:51:35.994108] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb32cc0) on tqpair=0xac6ed0 00:30:05.527 [2024-12-16 02:51:36.036000] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.527 [2024-12-16 02:51:36.036009] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.527 [2024-12-16 02:51:36.036012] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.527 [2024-12-16 02:51:36.036015] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb32b40) on tqpair=0xac6ed0 00:30:05.527 [2024-12-16 02:51:36.036028] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.527 [2024-12-16 02:51:36.036032] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xac6ed0) 00:30:05.527 [2024-12-16 02:51:36.036038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.527 [2024-12-16 02:51:36.036054] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb32b40, cid 4, qid 0 00:30:05.527 [2024-12-16 02:51:36.036135] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:05.527 [2024-12-16 02:51:36.036141] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:05.527 [2024-12-16 02:51:36.036144] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:05.527 [2024-12-16 02:51:36.036147] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xac6ed0): datao=0, datal=3072, cccid=4 00:30:05.527 [2024-12-16 02:51:36.036152] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb32b40) on tqpair(0xac6ed0): expected_datao=0, payload_size=3072 00:30:05.527 [2024-12-16 02:51:36.036157] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.527 [2024-12-16 02:51:36.036163] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:05.527 [2024-12-16 02:51:36.036166] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:05.527 [2024-12-16 02:51:36.036191] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.527 [2024-12-16 02:51:36.036197] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.527 [2024-12-16 02:51:36.036199] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.527 [2024-12-16 02:51:36.036203] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb32b40) on tqpair=0xac6ed0 00:30:05.527 [2024-12-16 02:51:36.036210] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.527 [2024-12-16 02:51:36.036213] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xac6ed0) 00:30:05.527 [2024-12-16 02:51:36.036219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.527 [2024-12-16 02:51:36.036232] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb32b40, cid 4, qid 0 00:30:05.527 [2024-12-16 02:51:36.036307] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:05.527 [2024-12-16 
02:51:36.036312] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:05.527 [2024-12-16 02:51:36.036315] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:05.527 [2024-12-16 02:51:36.036318] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xac6ed0): datao=0, datal=8, cccid=4 00:30:05.527 [2024-12-16 02:51:36.036322] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xb32b40) on tqpair(0xac6ed0): expected_datao=0, payload_size=8 00:30:05.527 [2024-12-16 02:51:36.036326] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.527 [2024-12-16 02:51:36.036331] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:05.527 [2024-12-16 02:51:36.036335] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:05.527 [2024-12-16 02:51:36.077009] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.527 [2024-12-16 02:51:36.077023] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.527 [2024-12-16 02:51:36.077026] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.527 [2024-12-16 02:51:36.077030] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb32b40) on tqpair=0xac6ed0 00:30:05.527 ===================================================== 00:30:05.527 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:05.527 ===================================================== 00:30:05.527 Controller Capabilities/Features 00:30:05.527 ================================ 00:30:05.527 Vendor ID: 0000 00:30:05.527 Subsystem Vendor ID: 0000 00:30:05.527 Serial Number: .................... 00:30:05.527 Model Number: ........................................ 
00:30:05.527 Firmware Version: 25.01 00:30:05.527 Recommended Arb Burst: 0 00:30:05.527 IEEE OUI Identifier: 00 00 00 00:30:05.527 Multi-path I/O 00:30:05.527 May have multiple subsystem ports: No 00:30:05.527 May have multiple controllers: No 00:30:05.527 Associated with SR-IOV VF: No 00:30:05.527 Max Data Transfer Size: 131072 00:30:05.527 Max Number of Namespaces: 0 00:30:05.527 Max Number of I/O Queues: 1024 00:30:05.527 NVMe Specification Version (VS): 1.3 00:30:05.527 NVMe Specification Version (Identify): 1.3 00:30:05.527 Maximum Queue Entries: 128 00:30:05.527 Contiguous Queues Required: Yes 00:30:05.527 Arbitration Mechanisms Supported 00:30:05.527 Weighted Round Robin: Not Supported 00:30:05.527 Vendor Specific: Not Supported 00:30:05.527 Reset Timeout: 15000 ms 00:30:05.527 Doorbell Stride: 4 bytes 00:30:05.527 NVM Subsystem Reset: Not Supported 00:30:05.527 Command Sets Supported 00:30:05.527 NVM Command Set: Supported 00:30:05.527 Boot Partition: Not Supported 00:30:05.527 Memory Page Size Minimum: 4096 bytes 00:30:05.527 Memory Page Size Maximum: 4096 bytes 00:30:05.527 Persistent Memory Region: Not Supported 00:30:05.527 Optional Asynchronous Events Supported 00:30:05.527 Namespace Attribute Notices: Not Supported 00:30:05.527 Firmware Activation Notices: Not Supported 00:30:05.527 ANA Change Notices: Not Supported 00:30:05.527 PLE Aggregate Log Change Notices: Not Supported 00:30:05.527 LBA Status Info Alert Notices: Not Supported 00:30:05.527 EGE Aggregate Log Change Notices: Not Supported 00:30:05.527 Normal NVM Subsystem Shutdown event: Not Supported 00:30:05.527 Zone Descriptor Change Notices: Not Supported 00:30:05.527 Discovery Log Change Notices: Supported 00:30:05.527 Controller Attributes 00:30:05.527 128-bit Host Identifier: Not Supported 00:30:05.527 Non-Operational Permissive Mode: Not Supported 00:30:05.527 NVM Sets: Not Supported 00:30:05.527 Read Recovery Levels: Not Supported 00:30:05.527 Endurance Groups: Not Supported 00:30:05.527 
Predictable Latency Mode: Not Supported 00:30:05.527 Traffic Based Keep ALive: Not Supported 00:30:05.527 Namespace Granularity: Not Supported 00:30:05.527 SQ Associations: Not Supported 00:30:05.527 UUID List: Not Supported 00:30:05.527 Multi-Domain Subsystem: Not Supported 00:30:05.527 Fixed Capacity Management: Not Supported 00:30:05.527 Variable Capacity Management: Not Supported 00:30:05.527 Delete Endurance Group: Not Supported 00:30:05.527 Delete NVM Set: Not Supported 00:30:05.527 Extended LBA Formats Supported: Not Supported 00:30:05.527 Flexible Data Placement Supported: Not Supported 00:30:05.527 00:30:05.527 Controller Memory Buffer Support 00:30:05.527 ================================ 00:30:05.527 Supported: No 00:30:05.527 00:30:05.527 Persistent Memory Region Support 00:30:05.527 ================================ 00:30:05.527 Supported: No 00:30:05.527 00:30:05.527 Admin Command Set Attributes 00:30:05.527 ============================ 00:30:05.527 Security Send/Receive: Not Supported 00:30:05.527 Format NVM: Not Supported 00:30:05.527 Firmware Activate/Download: Not Supported 00:30:05.527 Namespace Management: Not Supported 00:30:05.527 Device Self-Test: Not Supported 00:30:05.527 Directives: Not Supported 00:30:05.527 NVMe-MI: Not Supported 00:30:05.528 Virtualization Management: Not Supported 00:30:05.528 Doorbell Buffer Config: Not Supported 00:30:05.528 Get LBA Status Capability: Not Supported 00:30:05.528 Command & Feature Lockdown Capability: Not Supported 00:30:05.528 Abort Command Limit: 1 00:30:05.528 Async Event Request Limit: 4 00:30:05.528 Number of Firmware Slots: N/A 00:30:05.528 Firmware Slot 1 Read-Only: N/A 00:30:05.528 Firmware Activation Without Reset: N/A 00:30:05.528 Multiple Update Detection Support: N/A 00:30:05.528 Firmware Update Granularity: No Information Provided 00:30:05.528 Per-Namespace SMART Log: No 00:30:05.528 Asymmetric Namespace Access Log Page: Not Supported 00:30:05.528 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:30:05.528 Command Effects Log Page: Not Supported 00:30:05.528 Get Log Page Extended Data: Supported 00:30:05.528 Telemetry Log Pages: Not Supported 00:30:05.528 Persistent Event Log Pages: Not Supported 00:30:05.528 Supported Log Pages Log Page: May Support 00:30:05.528 Commands Supported & Effects Log Page: Not Supported 00:30:05.528 Feature Identifiers & Effects Log Page:May Support 00:30:05.528 NVMe-MI Commands & Effects Log Page: May Support 00:30:05.528 Data Area 4 for Telemetry Log: Not Supported 00:30:05.528 Error Log Page Entries Supported: 128 00:30:05.528 Keep Alive: Not Supported 00:30:05.528 00:30:05.528 NVM Command Set Attributes 00:30:05.528 ========================== 00:30:05.528 Submission Queue Entry Size 00:30:05.528 Max: 1 00:30:05.528 Min: 1 00:30:05.528 Completion Queue Entry Size 00:30:05.528 Max: 1 00:30:05.528 Min: 1 00:30:05.528 Number of Namespaces: 0 00:30:05.528 Compare Command: Not Supported 00:30:05.528 Write Uncorrectable Command: Not Supported 00:30:05.528 Dataset Management Command: Not Supported 00:30:05.528 Write Zeroes Command: Not Supported 00:30:05.528 Set Features Save Field: Not Supported 00:30:05.528 Reservations: Not Supported 00:30:05.528 Timestamp: Not Supported 00:30:05.528 Copy: Not Supported 00:30:05.528 Volatile Write Cache: Not Present 00:30:05.528 Atomic Write Unit (Normal): 1 00:30:05.528 Atomic Write Unit (PFail): 1 00:30:05.528 Atomic Compare & Write Unit: 1 00:30:05.528 Fused Compare & Write: Supported 00:30:05.528 Scatter-Gather List 00:30:05.528 SGL Command Set: Supported 00:30:05.528 SGL Keyed: Supported 00:30:05.528 SGL Bit Bucket Descriptor: Not Supported 00:30:05.528 SGL Metadata Pointer: Not Supported 00:30:05.528 Oversized SGL: Not Supported 00:30:05.528 SGL Metadata Address: Not Supported 00:30:05.528 SGL Offset: Supported 00:30:05.528 Transport SGL Data Block: Not Supported 00:30:05.528 Replay Protected Memory Block: Not Supported 00:30:05.528 00:30:05.528 
Firmware Slot Information 00:30:05.528 ========================= 00:30:05.528 Active slot: 0 00:30:05.528 00:30:05.528 00:30:05.528 Error Log 00:30:05.528 ========= 00:30:05.528 00:30:05.528 Active Namespaces 00:30:05.528 ================= 00:30:05.528 Discovery Log Page 00:30:05.528 ================== 00:30:05.528 Generation Counter: 2 00:30:05.528 Number of Records: 2 00:30:05.528 Record Format: 0 00:30:05.528 00:30:05.528 Discovery Log Entry 0 00:30:05.528 ---------------------- 00:30:05.528 Transport Type: 3 (TCP) 00:30:05.528 Address Family: 1 (IPv4) 00:30:05.528 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:05.528 Entry Flags: 00:30:05.528 Duplicate Returned Information: 1 00:30:05.528 Explicit Persistent Connection Support for Discovery: 1 00:30:05.528 Transport Requirements: 00:30:05.528 Secure Channel: Not Required 00:30:05.528 Port ID: 0 (0x0000) 00:30:05.528 Controller ID: 65535 (0xffff) 00:30:05.528 Admin Max SQ Size: 128 00:30:05.528 Transport Service Identifier: 4420 00:30:05.528 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:05.528 Transport Address: 10.0.0.2 00:30:05.528 Discovery Log Entry 1 00:30:05.528 ---------------------- 00:30:05.528 Transport Type: 3 (TCP) 00:30:05.528 Address Family: 1 (IPv4) 00:30:05.528 Subsystem Type: 2 (NVM Subsystem) 00:30:05.528 Entry Flags: 00:30:05.528 Duplicate Returned Information: 0 00:30:05.528 Explicit Persistent Connection Support for Discovery: 0 00:30:05.528 Transport Requirements: 00:30:05.528 Secure Channel: Not Required 00:30:05.528 Port ID: 0 (0x0000) 00:30:05.528 Controller ID: 65535 (0xffff) 00:30:05.528 Admin Max SQ Size: 128 00:30:05.528 Transport Service Identifier: 4420 00:30:05.528 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:30:05.528 Transport Address: 10.0.0.2 [2024-12-16 02:51:36.077111] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:30:05.528 [2024-12-16 
02:51:36.077122] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb32540) on tqpair=0xac6ed0 00:30:05.528 [2024-12-16 02:51:36.077128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.528 [2024-12-16 02:51:36.077133] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb326c0) on tqpair=0xac6ed0 00:30:05.528 [2024-12-16 02:51:36.077138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.528 [2024-12-16 02:51:36.077142] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb32840) on tqpair=0xac6ed0 00:30:05.528 [2024-12-16 02:51:36.077146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.528 [2024-12-16 02:51:36.077150] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb329c0) on tqpair=0xac6ed0 00:30:05.528 [2024-12-16 02:51:36.077154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.528 [2024-12-16 02:51:36.077161] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.528 [2024-12-16 02:51:36.077165] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.528 [2024-12-16 02:51:36.077168] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac6ed0) 00:30:05.528 [2024-12-16 02:51:36.077174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.528 [2024-12-16 02:51:36.077189] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb329c0, cid 3, qid 0 00:30:05.528 [2024-12-16 02:51:36.077253] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.528 [2024-12-16 
02:51:36.077259] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.528 [2024-12-16 02:51:36.077262] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.528 [2024-12-16 02:51:36.077265] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb329c0) on tqpair=0xac6ed0 00:30:05.528 [2024-12-16 02:51:36.077271] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.528 [2024-12-16 02:51:36.077274] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.528 [2024-12-16 02:51:36.077277] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac6ed0) 00:30:05.528 [2024-12-16 02:51:36.077282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.528 [2024-12-16 02:51:36.077294] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb329c0, cid 3, qid 0 00:30:05.528 [2024-12-16 02:51:36.077369] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.528 [2024-12-16 02:51:36.077374] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.528 [2024-12-16 02:51:36.077377] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.528 [2024-12-16 02:51:36.077380] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb329c0) on tqpair=0xac6ed0 00:30:05.529 [2024-12-16 02:51:36.077385] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:30:05.529 [2024-12-16 02:51:36.077389] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:30:05.529 [2024-12-16 02:51:36.077397] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.529 [2024-12-16 02:51:36.077400] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.529 
[2024-12-16 02:51:36.077403] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac6ed0) 00:30:05.529 [2024-12-16 02:51:36.077409] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.529 [2024-12-16 02:51:36.077418] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb329c0, cid 3, qid 0 00:30:05.529 [2024-12-16 02:51:36.077486] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.529 [2024-12-16 02:51:36.077492] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.529 [2024-12-16 02:51:36.077495] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.529 [2024-12-16 02:51:36.077498] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb329c0) on tqpair=0xac6ed0 00:30:05.529 [2024-12-16 02:51:36.077508] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.529 [2024-12-16 02:51:36.077512] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.529 [2024-12-16 02:51:36.077515] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac6ed0) 00:30:05.529 [2024-12-16 02:51:36.077520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.529 [2024-12-16 02:51:36.077529] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb329c0, cid 3, qid 0 00:30:05.529 [2024-12-16 02:51:36.077603] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.529 [2024-12-16 02:51:36.077608] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.529 [2024-12-16 02:51:36.077611] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.529 [2024-12-16 02:51:36.077614] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb329c0) on tqpair=0xac6ed0 
00:30:05.529 [2024-12-16 02:51:36.077622] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.529 [2024-12-16 02:51:36.077626] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.529 [2024-12-16 02:51:36.077629] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac6ed0) 00:30:05.529 [2024-12-16 02:51:36.077634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.529 [2024-12-16 02:51:36.077643] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb329c0, cid 3, qid 0 00:30:05.529 [2024-12-16 02:51:36.077701] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.529 [2024-12-16 02:51:36.077706] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.529 [2024-12-16 02:51:36.077709] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.529 [2024-12-16 02:51:36.077712] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb329c0) on tqpair=0xac6ed0 00:30:05.529 [2024-12-16 02:51:36.077721] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.529 [2024-12-16 02:51:36.077724] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.529 [2024-12-16 02:51:36.077727] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac6ed0) 00:30:05.529 [2024-12-16 02:51:36.077733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.529 [2024-12-16 02:51:36.077742] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb329c0, cid 3, qid 0 00:30:05.529 [2024-12-16 02:51:36.077820] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.529 [2024-12-16 02:51:36.077825] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.529 
[2024-12-16 02:51:36.077828] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.529 [2024-12-16 02:51:36.077831] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb329c0) on tqpair=0xac6ed0 00:30:05.529 [2024-12-16 02:51:36.077839] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.529 [2024-12-16 02:51:36.077843] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.529 [2024-12-16 02:51:36.081852] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xac6ed0) 00:30:05.529 [2024-12-16 02:51:36.081860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.529 [2024-12-16 02:51:36.081871] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xb329c0, cid 3, qid 0 00:30:05.529 [2024-12-16 02:51:36.082016] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.529 [2024-12-16 02:51:36.082021] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.529 [2024-12-16 02:51:36.082024] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.529 [2024-12-16 02:51:36.082028] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xb329c0) on tqpair=0xac6ed0 00:30:05.529 [2024-12-16 02:51:36.082034] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:30:05.529 00:30:05.529 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:30:05.529 [2024-12-16 02:51:36.122754] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:30:05.529 [2024-12-16 02:51:36.122794] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1118210 ] 00:30:05.529 [2024-12-16 02:51:36.164018] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:30:05.529 [2024-12-16 02:51:36.164055] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:05.529 [2024-12-16 02:51:36.164060] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:05.529 [2024-12-16 02:51:36.164071] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:05.529 [2024-12-16 02:51:36.164078] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:05.529 [2024-12-16 02:51:36.164487] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:30:05.529 [2024-12-16 02:51:36.164512] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x210fed0 0 00:30:05.529 [2024-12-16 02:51:36.174869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:05.529 [2024-12-16 02:51:36.174888] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:05.529 [2024-12-16 02:51:36.174894] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:05.529 [2024-12-16 02:51:36.174898] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:05.529 [2024-12-16 02:51:36.174929] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.529 [2024-12-16 02:51:36.174935] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.529 [2024-12-16 02:51:36.174940] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x210fed0) 00:30:05.529 [2024-12-16 02:51:36.174954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:05.529 [2024-12-16 02:51:36.174974] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x217b540, cid 0, qid 0 00:30:05.790 [2024-12-16 02:51:36.185858] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.791 [2024-12-16 02:51:36.185872] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.791 [2024-12-16 02:51:36.185876] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.791 [2024-12-16 02:51:36.185879] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x217b540) on tqpair=0x210fed0 00:30:05.791 [2024-12-16 02:51:36.185888] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:05.791 [2024-12-16 02:51:36.185895] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:30:05.791 [2024-12-16 02:51:36.185900] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:30:05.791 [2024-12-16 02:51:36.185912] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.791 [2024-12-16 02:51:36.185916] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.791 [2024-12-16 02:51:36.185919] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x210fed0) 00:30:05.791 [2024-12-16 02:51:36.185926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.791 [2024-12-16 02:51:36.185945] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x217b540, cid 0, qid 0 00:30:05.791 [2024-12-16 02:51:36.186115] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.791 [2024-12-16 02:51:36.186121] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.791 [2024-12-16 02:51:36.186124] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.791 [2024-12-16 02:51:36.186128] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x217b540) on tqpair=0x210fed0 00:30:05.791 [2024-12-16 02:51:36.186132] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:30:05.791 [2024-12-16 02:51:36.186139] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:30:05.791 [2024-12-16 02:51:36.186145] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.791 [2024-12-16 02:51:36.186149] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.791 [2024-12-16 02:51:36.186152] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x210fed0) 00:30:05.791 [2024-12-16 02:51:36.186157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.791 [2024-12-16 02:51:36.186168] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x217b540, cid 0, qid 0 00:30:05.791 [2024-12-16 02:51:36.186262] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.791 [2024-12-16 02:51:36.186267] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.791 [2024-12-16 02:51:36.186271] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.791 [2024-12-16 02:51:36.186274] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x217b540) on tqpair=0x210fed0 00:30:05.791 [2024-12-16 02:51:36.186278] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to check en (no timeout) 00:30:05.791 [2024-12-16 02:51:36.186285] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:05.791 [2024-12-16 02:51:36.186291] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.791 [2024-12-16 02:51:36.186294] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.791 [2024-12-16 02:51:36.186298] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x210fed0) 00:30:05.791 [2024-12-16 02:51:36.186303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.791 [2024-12-16 02:51:36.186313] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x217b540, cid 0, qid 0 00:30:05.791 [2024-12-16 02:51:36.186412] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.791 [2024-12-16 02:51:36.186418] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.791 [2024-12-16 02:51:36.186421] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.791 [2024-12-16 02:51:36.186424] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x217b540) on tqpair=0x210fed0 00:30:05.791 [2024-12-16 02:51:36.186429] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:05.791 [2024-12-16 02:51:36.186437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.791 [2024-12-16 02:51:36.186440] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.791 [2024-12-16 02:51:36.186444] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x210fed0) 00:30:05.791 [2024-12-16 02:51:36.186449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.791 [2024-12-16 02:51:36.186459] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x217b540, cid 0, qid 0 00:30:05.791 [2024-12-16 02:51:36.186520] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.791 [2024-12-16 02:51:36.186525] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.791 [2024-12-16 02:51:36.186530] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.791 [2024-12-16 02:51:36.186533] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x217b540) on tqpair=0x210fed0 00:30:05.791 [2024-12-16 02:51:36.186537] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:05.791 [2024-12-16 02:51:36.186542] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:05.791 [2024-12-16 02:51:36.186549] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:05.791 [2024-12-16 02:51:36.186656] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:30:05.791 [2024-12-16 02:51:36.186661] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:05.791 [2024-12-16 02:51:36.186667] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.791 [2024-12-16 02:51:36.186670] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.791 [2024-12-16 02:51:36.186673] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x210fed0) 00:30:05.791 [2024-12-16 02:51:36.186679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.791 [2024-12-16 02:51:36.186688] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x217b540, cid 0, qid 0 00:30:05.791 [2024-12-16 02:51:36.186753] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.791 [2024-12-16 02:51:36.186759] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.791 [2024-12-16 02:51:36.186762] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.791 [2024-12-16 02:51:36.186765] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x217b540) on tqpair=0x210fed0 00:30:05.791 [2024-12-16 02:51:36.186770] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:05.791 [2024-12-16 02:51:36.186778] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.791 [2024-12-16 02:51:36.186781] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.791 [2024-12-16 02:51:36.186784] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x210fed0) 00:30:05.791 [2024-12-16 02:51:36.186790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.791 [2024-12-16 02:51:36.186799] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x217b540, cid 0, qid 0 00:30:05.791 [2024-12-16 02:51:36.186905] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.791 [2024-12-16 02:51:36.186911] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.791 [2024-12-16 02:51:36.186914] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.791 [2024-12-16 02:51:36.186917] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x217b540) on tqpair=0x210fed0 00:30:05.791 [2024-12-16 02:51:36.186921] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:05.791 [2024-12-16 02:51:36.186925] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:05.791 [2024-12-16 02:51:36.186932] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:30:05.791 [2024-12-16 02:51:36.186939] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:05.791 [2024-12-16 02:51:36.186946] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.791 [2024-12-16 02:51:36.186950] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x210fed0) 00:30:05.791 [2024-12-16 02:51:36.186957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.791 [2024-12-16 02:51:36.186967] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x217b540, cid 0, qid 0 00:30:05.791 [2024-12-16 02:51:36.187061] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:05.791 [2024-12-16 02:51:36.187067] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:05.791 [2024-12-16 02:51:36.187070] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:05.791 [2024-12-16 02:51:36.187073] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x210fed0): datao=0, datal=4096, cccid=0 00:30:05.791 [2024-12-16 02:51:36.187077] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x217b540) on tqpair(0x210fed0): expected_datao=0, payload_size=4096 00:30:05.791 [2024-12-16 02:51:36.187081] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.791 [2024-12-16 02:51:36.187087] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:05.791 [2024-12-16 02:51:36.187090] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:05.791 [2024-12-16 02:51:36.187106] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.791 [2024-12-16 02:51:36.187111] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.791 [2024-12-16 02:51:36.187114] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.791 [2024-12-16 02:51:36.187118] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x217b540) on tqpair=0x210fed0 00:30:05.791 [2024-12-16 02:51:36.187123] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:30:05.791 [2024-12-16 02:51:36.187127] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:30:05.791 [2024-12-16 02:51:36.187131] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:30:05.791 [2024-12-16 02:51:36.187135] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:30:05.791 [2024-12-16 02:51:36.187139] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:30:05.791 [2024-12-16 02:51:36.187143] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:30:05.791 [2024-12-16 02:51:36.187152] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:05.791 [2024-12-16 02:51:36.187161] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.791 [2024-12-16 02:51:36.187165] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.791 [2024-12-16 02:51:36.187168] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x210fed0) 00:30:05.792 [2024-12-16 02:51:36.187173] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:05.792 [2024-12-16 02:51:36.187183] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x217b540, cid 0, qid 0 00:30:05.792 [2024-12-16 02:51:36.187260] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.792 [2024-12-16 02:51:36.187266] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.792 [2024-12-16 02:51:36.187269] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.792 [2024-12-16 02:51:36.187272] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x217b540) on tqpair=0x210fed0 00:30:05.792 [2024-12-16 02:51:36.187277] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.792 [2024-12-16 02:51:36.187281] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.792 [2024-12-16 02:51:36.187284] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x210fed0) 00:30:05.792 [2024-12-16 02:51:36.187289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:05.792 [2024-12-16 02:51:36.187296] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.792 [2024-12-16 02:51:36.187299] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.792 [2024-12-16 02:51:36.187303] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x210fed0) 00:30:05.792 [2024-12-16 02:51:36.187307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:30:05.792 [2024-12-16 02:51:36.187312] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.792 [2024-12-16 02:51:36.187316] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.792 [2024-12-16 02:51:36.187319] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x210fed0) 00:30:05.792 [2024-12-16 02:51:36.187324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:05.792 [2024-12-16 02:51:36.187328] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.792 [2024-12-16 02:51:36.187332] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.792 [2024-12-16 02:51:36.187335] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x210fed0) 00:30:05.792 [2024-12-16 02:51:36.187340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:05.792 [2024-12-16 02:51:36.187344] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:05.792 [2024-12-16 02:51:36.187354] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:05.792 [2024-12-16 02:51:36.187359] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.792 [2024-12-16 02:51:36.187363] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x210fed0) 00:30:05.792 [2024-12-16 02:51:36.187368] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.792 [2024-12-16 02:51:36.187379] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x217b540, cid 0, qid 0 00:30:05.792 [2024-12-16 02:51:36.187384] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x217b6c0, cid 1, qid 0 00:30:05.792 [2024-12-16 02:51:36.187388] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x217b840, cid 2, qid 0 00:30:05.792 [2024-12-16 02:51:36.187392] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x217b9c0, cid 3, qid 0 00:30:05.792 [2024-12-16 02:51:36.187396] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x217bb40, cid 4, qid 0 00:30:05.792 [2024-12-16 02:51:36.187488] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.792 [2024-12-16 02:51:36.187494] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.792 [2024-12-16 02:51:36.187497] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.792 [2024-12-16 02:51:36.187500] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x217bb40) on tqpair=0x210fed0 00:30:05.792 [2024-12-16 02:51:36.187504] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:30:05.792 [2024-12-16 02:51:36.187508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:30:05.792 [2024-12-16 02:51:36.187518] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:30:05.792 [2024-12-16 02:51:36.187525] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:30:05.792 [2024-12-16 02:51:36.187530] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.792 [2024-12-16 02:51:36.187533] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.792 [2024-12-16 
02:51:36.187538] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x210fed0) 00:30:05.792 [2024-12-16 02:51:36.187543] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:05.792 [2024-12-16 02:51:36.187553] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x217bb40, cid 4, qid 0 00:30:05.792 [2024-12-16 02:51:36.187661] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.792 [2024-12-16 02:51:36.187666] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.792 [2024-12-16 02:51:36.187669] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.792 [2024-12-16 02:51:36.187673] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x217bb40) on tqpair=0x210fed0 00:30:05.792 [2024-12-16 02:51:36.187722] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:30:05.792 [2024-12-16 02:51:36.187732] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:30:05.792 [2024-12-16 02:51:36.187738] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.792 [2024-12-16 02:51:36.187741] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x210fed0) 00:30:05.792 [2024-12-16 02:51:36.187746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.792 [2024-12-16 02:51:36.187756] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x217bb40, cid 4, qid 0 00:30:05.792 [2024-12-16 02:51:36.187830] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:05.792 [2024-12-16 02:51:36.187836] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:05.792 [2024-12-16 02:51:36.187839] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:05.792 [2024-12-16 02:51:36.187842] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x210fed0): datao=0, datal=4096, cccid=4 00:30:05.792 [2024-12-16 02:51:36.187851] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x217bb40) on tqpair(0x210fed0): expected_datao=0, payload_size=4096 00:30:05.792 [2024-12-16 02:51:36.187855] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.792 [2024-12-16 02:51:36.187861] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:05.792 [2024-12-16 02:51:36.187865] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:05.792 [2024-12-16 02:51:36.187911] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.792 [2024-12-16 02:51:36.187917] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.792 [2024-12-16 02:51:36.187920] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.792 [2024-12-16 02:51:36.187923] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x217bb40) on tqpair=0x210fed0 00:30:05.792 [2024-12-16 02:51:36.187932] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:30:05.792 [2024-12-16 02:51:36.187943] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:30:05.792 [2024-12-16 02:51:36.187952] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:30:05.792 [2024-12-16 02:51:36.187958] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.792 [2024-12-16 02:51:36.187961] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=4 on tqpair(0x210fed0) 00:30:05.792 [2024-12-16 02:51:36.187966] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.792 [2024-12-16 02:51:36.187976] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x217bb40, cid 4, qid 0 00:30:05.792 [2024-12-16 02:51:36.188063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:05.792 [2024-12-16 02:51:36.188068] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:05.792 [2024-12-16 02:51:36.188073] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:05.792 [2024-12-16 02:51:36.188076] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x210fed0): datao=0, datal=4096, cccid=4 00:30:05.792 [2024-12-16 02:51:36.188080] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x217bb40) on tqpair(0x210fed0): expected_datao=0, payload_size=4096 00:30:05.792 [2024-12-16 02:51:36.188084] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.792 [2024-12-16 02:51:36.188089] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:05.792 [2024-12-16 02:51:36.188093] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:05.792 [2024-12-16 02:51:36.188113] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.792 [2024-12-16 02:51:36.188119] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.792 [2024-12-16 02:51:36.188122] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.792 [2024-12-16 02:51:36.188125] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x217bb40) on tqpair=0x210fed0 00:30:05.792 [2024-12-16 02:51:36.188134] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:30:05.792 
[2024-12-16 02:51:36.188143] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:30:05.792 [2024-12-16 02:51:36.188148] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.792 [2024-12-16 02:51:36.188152] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x210fed0) 00:30:05.792 [2024-12-16 02:51:36.188157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.792 [2024-12-16 02:51:36.188167] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x217bb40, cid 4, qid 0 00:30:05.792 [2024-12-16 02:51:36.188241] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:05.792 [2024-12-16 02:51:36.188247] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:05.792 [2024-12-16 02:51:36.188250] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:05.792 [2024-12-16 02:51:36.188253] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x210fed0): datao=0, datal=4096, cccid=4 00:30:05.792 [2024-12-16 02:51:36.188257] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x217bb40) on tqpair(0x210fed0): expected_datao=0, payload_size=4096 00:30:05.792 [2024-12-16 02:51:36.188260] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.792 [2024-12-16 02:51:36.188266] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:05.792 [2024-12-16 02:51:36.188269] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:05.792 [2024-12-16 02:51:36.188315] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.792 [2024-12-16 02:51:36.188321] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.793 [2024-12-16 02:51:36.188324] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.188327] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x217bb40) on tqpair=0x210fed0 00:30:05.793 [2024-12-16 02:51:36.188333] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:05.793 [2024-12-16 02:51:36.188340] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:30:05.793 [2024-12-16 02:51:36.188347] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:30:05.793 [2024-12-16 02:51:36.188352] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:30:05.793 [2024-12-16 02:51:36.188356] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:05.793 [2024-12-16 02:51:36.188362] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:30:05.793 [2024-12-16 02:51:36.188367] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:30:05.793 [2024-12-16 02:51:36.188371] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:30:05.793 [2024-12-16 02:51:36.188375] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:30:05.793 [2024-12-16 02:51:36.188388] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.188392] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x210fed0) 00:30:05.793 [2024-12-16 02:51:36.188397] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.793 [2024-12-16 02:51:36.188403] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.188406] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.188409] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x210fed0) 00:30:05.793 [2024-12-16 02:51:36.188414] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:05.793 [2024-12-16 02:51:36.188426] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x217bb40, cid 4, qid 0 00:30:05.793 [2024-12-16 02:51:36.188430] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x217bcc0, cid 5, qid 0 00:30:05.793 [2024-12-16 02:51:36.188546] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.793 [2024-12-16 02:51:36.188552] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.793 [2024-12-16 02:51:36.188554] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.188558] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x217bb40) on tqpair=0x210fed0 00:30:05.793 [2024-12-16 02:51:36.188563] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.793 [2024-12-16 02:51:36.188568] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.793 [2024-12-16 02:51:36.188571] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.188575] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x217bcc0) on tqpair=0x210fed0 00:30:05.793 [2024-12-16 
02:51:36.188582] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.188586] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x210fed0) 00:30:05.793 [2024-12-16 02:51:36.188591] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.793 [2024-12-16 02:51:36.188600] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x217bcc0, cid 5, qid 0 00:30:05.793 [2024-12-16 02:51:36.188698] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.793 [2024-12-16 02:51:36.188703] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.793 [2024-12-16 02:51:36.188707] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.188710] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x217bcc0) on tqpair=0x210fed0 00:30:05.793 [2024-12-16 02:51:36.188717] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.188721] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x210fed0) 00:30:05.793 [2024-12-16 02:51:36.188726] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.793 [2024-12-16 02:51:36.188735] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x217bcc0, cid 5, qid 0 00:30:05.793 [2024-12-16 02:51:36.188794] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.793 [2024-12-16 02:51:36.188800] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.793 [2024-12-16 02:51:36.188803] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.188806] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x217bcc0) on tqpair=0x210fed0 00:30:05.793 [2024-12-16 02:51:36.188814] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.188817] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x210fed0) 00:30:05.793 [2024-12-16 02:51:36.188822] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.793 [2024-12-16 02:51:36.188831] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x217bcc0, cid 5, qid 0 00:30:05.793 [2024-12-16 02:51:36.188921] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.793 [2024-12-16 02:51:36.188927] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.793 [2024-12-16 02:51:36.188930] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.188934] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x217bcc0) on tqpair=0x210fed0 00:30:05.793 [2024-12-16 02:51:36.188946] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.188950] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x210fed0) 00:30:05.793 [2024-12-16 02:51:36.188956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.793 [2024-12-16 02:51:36.188962] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.188965] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x210fed0) 00:30:05.793 [2024-12-16 02:51:36.188970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:05.793 [2024-12-16 02:51:36.188976] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.188979] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x210fed0) 00:30:05.793 [2024-12-16 02:51:36.188984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.793 [2024-12-16 02:51:36.188991] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.188994] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x210fed0) 00:30:05.793 [2024-12-16 02:51:36.188999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.793 [2024-12-16 02:51:36.189010] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x217bcc0, cid 5, qid 0 00:30:05.793 [2024-12-16 02:51:36.189015] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x217bb40, cid 4, qid 0 00:30:05.793 [2024-12-16 02:51:36.189019] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x217be40, cid 6, qid 0 00:30:05.793 [2024-12-16 02:51:36.189023] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x217bfc0, cid 7, qid 0 00:30:05.793 [2024-12-16 02:51:36.189161] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:05.793 [2024-12-16 02:51:36.189167] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:05.793 [2024-12-16 02:51:36.189171] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.189174] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x210fed0): datao=0, datal=8192, cccid=5 00:30:05.793 [2024-12-16 02:51:36.189177] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x217bcc0) on tqpair(0x210fed0): expected_datao=0, payload_size=8192 00:30:05.793 [2024-12-16 02:51:36.189184] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.189226] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.189230] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.189234] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:05.793 [2024-12-16 02:51:36.189239] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:05.793 [2024-12-16 02:51:36.189242] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.189245] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x210fed0): datao=0, datal=512, cccid=4 00:30:05.793 [2024-12-16 02:51:36.189249] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x217bb40) on tqpair(0x210fed0): expected_datao=0, payload_size=512 00:30:05.793 [2024-12-16 02:51:36.189253] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.189258] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.189261] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.189266] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:05.793 [2024-12-16 02:51:36.189271] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:05.793 [2024-12-16 02:51:36.189274] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.189277] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x210fed0): datao=0, datal=512, cccid=6 00:30:05.793 [2024-12-16 02:51:36.189281] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x217be40) on tqpair(0x210fed0): expected_datao=0, payload_size=512 00:30:05.793 [2024-12-16 02:51:36.189284] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.189290] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.189293] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.189298] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:05.793 [2024-12-16 02:51:36.189302] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:05.793 [2024-12-16 02:51:36.189305] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.189308] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x210fed0): datao=0, datal=4096, cccid=7 00:30:05.793 [2024-12-16 02:51:36.189312] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x217bfc0) on tqpair(0x210fed0): expected_datao=0, payload_size=4096 00:30:05.793 [2024-12-16 02:51:36.189316] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.189321] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.189325] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.189333] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.793 [2024-12-16 02:51:36.189338] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.793 [2024-12-16 02:51:36.189341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.793 [2024-12-16 02:51:36.189344] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x217bcc0) on tqpair=0x210fed0 00:30:05.794 [2024-12-16 02:51:36.189353] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.794 [2024-12-16 02:51:36.189359] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.794 [2024-12-16 02:51:36.189362] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.794 [2024-12-16 02:51:36.189365] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x217bb40) on tqpair=0x210fed0 00:30:05.794 [2024-12-16 02:51:36.189373] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.794 [2024-12-16 02:51:36.189378] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.794 [2024-12-16 02:51:36.189381] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.794 [2024-12-16 02:51:36.189384] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x217be40) on tqpair=0x210fed0 00:30:05.794 [2024-12-16 02:51:36.189392] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.794 [2024-12-16 02:51:36.189397] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.794 [2024-12-16 02:51:36.189399] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.794 [2024-12-16 02:51:36.189403] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x217bfc0) on tqpair=0x210fed0 00:30:05.794 ===================================================== 00:30:05.794 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:05.794 ===================================================== 00:30:05.794 Controller Capabilities/Features 00:30:05.794 ================================ 00:30:05.794 Vendor ID: 8086 00:30:05.794 Subsystem Vendor ID: 8086 00:30:05.794 Serial Number: SPDK00000000000001 00:30:05.794 Model Number: SPDK bdev Controller 00:30:05.794 Firmware Version: 25.01 00:30:05.794 Recommended Arb Burst: 6 00:30:05.794 IEEE OUI Identifier: e4 d2 5c 00:30:05.794 Multi-path I/O 00:30:05.794 May have multiple subsystem ports: Yes 00:30:05.794 May have multiple controllers: Yes 00:30:05.794 Associated with SR-IOV VF: No 
00:30:05.794 Max Data Transfer Size: 131072 00:30:05.794 Max Number of Namespaces: 32 00:30:05.794 Max Number of I/O Queues: 127 00:30:05.794 NVMe Specification Version (VS): 1.3 00:30:05.794 NVMe Specification Version (Identify): 1.3 00:30:05.794 Maximum Queue Entries: 128 00:30:05.794 Contiguous Queues Required: Yes 00:30:05.794 Arbitration Mechanisms Supported 00:30:05.794 Weighted Round Robin: Not Supported 00:30:05.794 Vendor Specific: Not Supported 00:30:05.794 Reset Timeout: 15000 ms 00:30:05.794 Doorbell Stride: 4 bytes 00:30:05.794 NVM Subsystem Reset: Not Supported 00:30:05.794 Command Sets Supported 00:30:05.794 NVM Command Set: Supported 00:30:05.794 Boot Partition: Not Supported 00:30:05.794 Memory Page Size Minimum: 4096 bytes 00:30:05.794 Memory Page Size Maximum: 4096 bytes 00:30:05.794 Persistent Memory Region: Not Supported 00:30:05.794 Optional Asynchronous Events Supported 00:30:05.794 Namespace Attribute Notices: Supported 00:30:05.794 Firmware Activation Notices: Not Supported 00:30:05.794 ANA Change Notices: Not Supported 00:30:05.794 PLE Aggregate Log Change Notices: Not Supported 00:30:05.794 LBA Status Info Alert Notices: Not Supported 00:30:05.794 EGE Aggregate Log Change Notices: Not Supported 00:30:05.794 Normal NVM Subsystem Shutdown event: Not Supported 00:30:05.794 Zone Descriptor Change Notices: Not Supported 00:30:05.794 Discovery Log Change Notices: Not Supported 00:30:05.794 Controller Attributes 00:30:05.794 128-bit Host Identifier: Supported 00:30:05.794 Non-Operational Permissive Mode: Not Supported 00:30:05.794 NVM Sets: Not Supported 00:30:05.794 Read Recovery Levels: Not Supported 00:30:05.794 Endurance Groups: Not Supported 00:30:05.794 Predictable Latency Mode: Not Supported 00:30:05.794 Traffic Based Keep ALive: Not Supported 00:30:05.794 Namespace Granularity: Not Supported 00:30:05.794 SQ Associations: Not Supported 00:30:05.794 UUID List: Not Supported 00:30:05.794 Multi-Domain Subsystem: Not Supported 00:30:05.794 
Fixed Capacity Management: Not Supported 00:30:05.794 Variable Capacity Management: Not Supported 00:30:05.794 Delete Endurance Group: Not Supported 00:30:05.794 Delete NVM Set: Not Supported 00:30:05.794 Extended LBA Formats Supported: Not Supported 00:30:05.794 Flexible Data Placement Supported: Not Supported 00:30:05.794 00:30:05.794 Controller Memory Buffer Support 00:30:05.794 ================================ 00:30:05.794 Supported: No 00:30:05.794 00:30:05.794 Persistent Memory Region Support 00:30:05.794 ================================ 00:30:05.794 Supported: No 00:30:05.794 00:30:05.794 Admin Command Set Attributes 00:30:05.794 ============================ 00:30:05.794 Security Send/Receive: Not Supported 00:30:05.794 Format NVM: Not Supported 00:30:05.794 Firmware Activate/Download: Not Supported 00:30:05.794 Namespace Management: Not Supported 00:30:05.794 Device Self-Test: Not Supported 00:30:05.794 Directives: Not Supported 00:30:05.794 NVMe-MI: Not Supported 00:30:05.794 Virtualization Management: Not Supported 00:30:05.794 Doorbell Buffer Config: Not Supported 00:30:05.794 Get LBA Status Capability: Not Supported 00:30:05.794 Command & Feature Lockdown Capability: Not Supported 00:30:05.794 Abort Command Limit: 4 00:30:05.794 Async Event Request Limit: 4 00:30:05.794 Number of Firmware Slots: N/A 00:30:05.794 Firmware Slot 1 Read-Only: N/A 00:30:05.794 Firmware Activation Without Reset: N/A 00:30:05.794 Multiple Update Detection Support: N/A 00:30:05.794 Firmware Update Granularity: No Information Provided 00:30:05.794 Per-Namespace SMART Log: No 00:30:05.794 Asymmetric Namespace Access Log Page: Not Supported 00:30:05.794 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:30:05.794 Command Effects Log Page: Supported 00:30:05.794 Get Log Page Extended Data: Supported 00:30:05.794 Telemetry Log Pages: Not Supported 00:30:05.794 Persistent Event Log Pages: Not Supported 00:30:05.794 Supported Log Pages Log Page: May Support 00:30:05.794 Commands Supported & 
Effects Log Page: Not Supported 00:30:05.794 Feature Identifiers & Effects Log Page:May Support 00:30:05.794 NVMe-MI Commands & Effects Log Page: May Support 00:30:05.794 Data Area 4 for Telemetry Log: Not Supported 00:30:05.794 Error Log Page Entries Supported: 128 00:30:05.794 Keep Alive: Supported 00:30:05.794 Keep Alive Granularity: 10000 ms 00:30:05.794 00:30:05.794 NVM Command Set Attributes 00:30:05.794 ========================== 00:30:05.794 Submission Queue Entry Size 00:30:05.794 Max: 64 00:30:05.794 Min: 64 00:30:05.794 Completion Queue Entry Size 00:30:05.794 Max: 16 00:30:05.794 Min: 16 00:30:05.794 Number of Namespaces: 32 00:30:05.794 Compare Command: Supported 00:30:05.794 Write Uncorrectable Command: Not Supported 00:30:05.794 Dataset Management Command: Supported 00:30:05.794 Write Zeroes Command: Supported 00:30:05.794 Set Features Save Field: Not Supported 00:30:05.794 Reservations: Supported 00:30:05.794 Timestamp: Not Supported 00:30:05.794 Copy: Supported 00:30:05.794 Volatile Write Cache: Present 00:30:05.794 Atomic Write Unit (Normal): 1 00:30:05.794 Atomic Write Unit (PFail): 1 00:30:05.794 Atomic Compare & Write Unit: 1 00:30:05.794 Fused Compare & Write: Supported 00:30:05.794 Scatter-Gather List 00:30:05.794 SGL Command Set: Supported 00:30:05.794 SGL Keyed: Supported 00:30:05.794 SGL Bit Bucket Descriptor: Not Supported 00:30:05.794 SGL Metadata Pointer: Not Supported 00:30:05.794 Oversized SGL: Not Supported 00:30:05.794 SGL Metadata Address: Not Supported 00:30:05.794 SGL Offset: Supported 00:30:05.794 Transport SGL Data Block: Not Supported 00:30:05.794 Replay Protected Memory Block: Not Supported 00:30:05.794 00:30:05.794 Firmware Slot Information 00:30:05.794 ========================= 00:30:05.794 Active slot: 1 00:30:05.794 Slot 1 Firmware Revision: 25.01 00:30:05.794 00:30:05.794 00:30:05.794 Commands Supported and Effects 00:30:05.794 ============================== 00:30:05.794 Admin Commands 00:30:05.794 -------------- 
00:30:05.794 Get Log Page (02h): Supported 00:30:05.794 Identify (06h): Supported 00:30:05.794 Abort (08h): Supported 00:30:05.794 Set Features (09h): Supported 00:30:05.794 Get Features (0Ah): Supported 00:30:05.794 Asynchronous Event Request (0Ch): Supported 00:30:05.794 Keep Alive (18h): Supported 00:30:05.794 I/O Commands 00:30:05.794 ------------ 00:30:05.794 Flush (00h): Supported LBA-Change 00:30:05.794 Write (01h): Supported LBA-Change 00:30:05.794 Read (02h): Supported 00:30:05.794 Compare (05h): Supported 00:30:05.794 Write Zeroes (08h): Supported LBA-Change 00:30:05.794 Dataset Management (09h): Supported LBA-Change 00:30:05.794 Copy (19h): Supported LBA-Change 00:30:05.794 00:30:05.794 Error Log 00:30:05.794 ========= 00:30:05.794 00:30:05.794 Arbitration 00:30:05.794 =========== 00:30:05.794 Arbitration Burst: 1 00:30:05.794 00:30:05.794 Power Management 00:30:05.794 ================ 00:30:05.794 Number of Power States: 1 00:30:05.794 Current Power State: Power State #0 00:30:05.794 Power State #0: 00:30:05.794 Max Power: 0.00 W 00:30:05.794 Non-Operational State: Operational 00:30:05.794 Entry Latency: Not Reported 00:30:05.794 Exit Latency: Not Reported 00:30:05.794 Relative Read Throughput: 0 00:30:05.794 Relative Read Latency: 0 00:30:05.794 Relative Write Throughput: 0 00:30:05.794 Relative Write Latency: 0 00:30:05.794 Idle Power: Not Reported 00:30:05.795 Active Power: Not Reported 00:30:05.795 Non-Operational Permissive Mode: Not Supported 00:30:05.795 00:30:05.795 Health Information 00:30:05.795 ================== 00:30:05.795 Critical Warnings: 00:30:05.795 Available Spare Space: OK 00:30:05.795 Temperature: OK 00:30:05.795 Device Reliability: OK 00:30:05.795 Read Only: No 00:30:05.795 Volatile Memory Backup: OK 00:30:05.795 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:05.795 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:30:05.795 Available Spare: 0% 00:30:05.795 Available Spare Threshold: 0% 00:30:05.795 
[2024-12-16 02:51:36.189483] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.795 [2024-12-16 02:51:36.189487] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x210fed0) 00:30:05.795 [2024-12-16 02:51:36.189493] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.795 [2024-12-16 02:51:36.189504] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x217bfc0, cid 7, qid 0 00:30:05.795 [2024-12-16 02:51:36.189573] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.795 [2024-12-16 02:51:36.189579] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.795 [2024-12-16 02:51:36.189582] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.795 [2024-12-16 02:51:36.189585] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x217bfc0) on tqpair=0x210fed0 00:30:05.795 [2024-12-16 02:51:36.189613] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:30:05.795 [2024-12-16 02:51:36.189622] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x217b540) on tqpair=0x210fed0 00:30:05.795 [2024-12-16 02:51:36.189627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.795 [2024-12-16 02:51:36.189632] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x217b6c0) on tqpair=0x210fed0 00:30:05.795 [2024-12-16 02:51:36.189636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.795 [2024-12-16 02:51:36.189640] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x217b840) on tqpair=0x210fed0 00:30:05.795 [2024-12-16 02:51:36.189644] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.795 [2024-12-16 02:51:36.189649] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x217b9c0) on tqpair=0x210fed0 00:30:05.795 [2024-12-16 02:51:36.189652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:05.795 [2024-12-16 02:51:36.189659] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.795 [2024-12-16 02:51:36.189663] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.795 [2024-12-16 02:51:36.189666] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x210fed0) 00:30:05.795 [2024-12-16 02:51:36.189671] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.795 [2024-12-16 02:51:36.189683] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x217b9c0, cid 3, qid 0 00:30:05.795 [2024-12-16 02:51:36.189773] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.795 [2024-12-16 02:51:36.189779] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.795 [2024-12-16 02:51:36.189782] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.795 [2024-12-16 02:51:36.189785] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x217b9c0) on tqpair=0x210fed0 00:30:05.795 [2024-12-16 02:51:36.189791] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.795 [2024-12-16 02:51:36.189794] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.795 [2024-12-16 02:51:36.189797] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x210fed0) 00:30:05.795 [2024-12-16 02:51:36.189803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.795 [2024-12-16 02:51:36.189814] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x217b9c0, cid 3, qid 0 00:30:05.795 [2024-12-16 02:51:36.193856] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.795 [2024-12-16 02:51:36.193870] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.795 [2024-12-16 02:51:36.193873] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.795 [2024-12-16 02:51:36.193877] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x217b9c0) on tqpair=0x210fed0 00:30:05.795 [2024-12-16 02:51:36.193882] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:30:05.795 [2024-12-16 02:51:36.193886] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:30:05.795 [2024-12-16 02:51:36.193900] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:05.795 [2024-12-16 02:51:36.193904] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:05.795 [2024-12-16 02:51:36.193907] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x210fed0) 00:30:05.795 [2024-12-16 02:51:36.193914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:05.795 [2024-12-16 02:51:36.193930] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x217b9c0, cid 3, qid 0 00:30:05.795 [2024-12-16 02:51:36.194082] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:05.795 [2024-12-16 02:51:36.194087] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:05.795 [2024-12-16 02:51:36.194090] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:05.795 [2024-12-16 02:51:36.194094] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x217b9c0) on tqpair=0x210fed0 00:30:05.795 [2024-12-16 02:51:36.194101] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 0 milliseconds 00:30:05.795 
Life Percentage Used: 0% 00:30:05.795 Data Units Read: 0 00:30:05.795 Data Units Written: 0 00:30:05.795 Host Read Commands: 0 00:30:05.795 Host Write Commands: 0 00:30:05.795 Controller Busy Time: 0 minutes 00:30:05.795 Power Cycles: 0 00:30:05.795 Power On Hours: 0 hours 00:30:05.795 Unsafe Shutdowns: 0 00:30:05.795 Unrecoverable Media Errors: 0 00:30:05.795 Lifetime Error Log Entries: 0 00:30:05.795 Warning Temperature Time: 0 minutes 00:30:05.795 Critical Temperature Time: 0 minutes 00:30:05.795 00:30:05.795 Number of Queues 00:30:05.795 ================ 00:30:05.795 Number of I/O Submission Queues: 127 00:30:05.795 Number of I/O Completion Queues: 127 00:30:05.795 00:30:05.795 Active Namespaces 00:30:05.795 ================= 00:30:05.795 Namespace ID:1 00:30:05.795 Error Recovery Timeout: Unlimited 00:30:05.795 Command Set Identifier: NVM (00h) 00:30:05.795 Deallocate: Supported 00:30:05.795 Deallocated/Unwritten Error: Not Supported 00:30:05.795 Deallocated Read Value: Unknown 00:30:05.795 Deallocate in Write Zeroes: Not Supported 00:30:05.795 Deallocated Guard Field: 0xFFFF 00:30:05.795 Flush: Supported 00:30:05.795 Reservation: Supported 00:30:05.795 Namespace Sharing Capabilities: Multiple Controllers 00:30:05.795 Size (in LBAs): 131072 (0GiB) 00:30:05.795 Capacity (in LBAs): 131072 (0GiB) 00:30:05.795 Utilization (in LBAs): 131072 (0GiB) 00:30:05.795 NGUID: ABCDEF0123456789ABCDEF0123456789 00:30:05.795 EUI64: ABCDEF0123456789 00:30:05.795 UUID: 99515b98-6863-437e-8275-ef8ac80d1076 00:30:05.795 Thin Provisioning: Not Supported 00:30:05.795 Per-NS Atomic Units: Yes 00:30:05.795 Atomic Boundary Size (Normal): 0 00:30:05.795 Atomic Boundary Size (PFail): 0 00:30:05.795 Atomic Boundary Offset: 0 00:30:05.795 
Maximum Single Source Range Length: 65535 00:30:05.795 Maximum Copy Length: 65535 00:30:05.795 Maximum Source Range Count: 1 00:30:05.795 NGUID/EUI64 Never Reused: No 00:30:05.795 Namespace Write Protected: No 00:30:05.795 Number of LBA Formats: 1 00:30:05.795 Current LBA Format: LBA Format #00 00:30:05.795 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:05.795 00:30:05.795 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:05.795 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:05.795 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:05.795 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:05.795 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:05.795 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:05.795 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:30:05.795 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:05.795 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:30:05.795 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:05.795 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:30:05.795 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:05.796 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:05.796 rmmod nvme_tcp 00:30:05.796 rmmod nvme_fabrics 00:30:05.796 rmmod nvme_keyring 00:30:05.796 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:05.796 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:30:05.796 
02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:30:05.796 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1118134 ']' 00:30:05.796 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1118134 00:30:05.796 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1118134 ']' 00:30:05.796 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1118134 00:30:05.796 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:30:05.796 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:05.796 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1118134 00:30:05.796 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:05.796 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:05.796 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1118134' 00:30:05.796 killing process with pid 1118134 00:30:05.796 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1118134 00:30:05.796 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1118134 00:30:06.055 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:06.055 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:06.055 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:06.055 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:30:06.055 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:30:06.055 02:51:36 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:06.055 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:30:06.055 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:06.055 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:06.055 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:06.055 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:06.055 02:51:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.961 02:51:38 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:07.961 00:30:07.961 real 0m9.224s 00:30:07.961 user 0m5.278s 00:30:07.961 sys 0m4.772s 00:30:07.961 02:51:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:07.961 02:51:38 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:07.961 ************************************ 00:30:07.961 END TEST nvmf_identify 00:30:07.961 ************************************ 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:08.222 ************************************ 00:30:08.222 START TEST nvmf_perf 00:30:08.222 ************************************ 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:08.222 * Looking for test storage... 00:30:08.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:08.222 02:51:38 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:08.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.222 --rc genhtml_branch_coverage=1 00:30:08.222 --rc genhtml_function_coverage=1 00:30:08.222 --rc genhtml_legend=1 00:30:08.222 --rc geninfo_all_blocks=1 00:30:08.222 --rc geninfo_unexecuted_blocks=1 00:30:08.222 00:30:08.222 ' 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # 
LCOV_OPTS=' 00:30:08.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.222 --rc genhtml_branch_coverage=1 00:30:08.222 --rc genhtml_function_coverage=1 00:30:08.222 --rc genhtml_legend=1 00:30:08.222 --rc geninfo_all_blocks=1 00:30:08.222 --rc geninfo_unexecuted_blocks=1 00:30:08.222 00:30:08.222 ' 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:08.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.222 --rc genhtml_branch_coverage=1 00:30:08.222 --rc genhtml_function_coverage=1 00:30:08.222 --rc genhtml_legend=1 00:30:08.222 --rc geninfo_all_blocks=1 00:30:08.222 --rc geninfo_unexecuted_blocks=1 00:30:08.222 00:30:08.222 ' 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:08.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:08.222 --rc genhtml_branch_coverage=1 00:30:08.222 --rc genhtml_function_coverage=1 00:30:08.222 --rc genhtml_legend=1 00:30:08.222 --rc geninfo_all_blocks=1 00:30:08.222 --rc geninfo_unexecuted_blocks=1 00:30:08.222 00:30:08.222 ' 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:08.222 02:51:38 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.222 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.482 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:30:08.482 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:08.482 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:30:08.482 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:08.482 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:08.482 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:08.482 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:08.482 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:08.482 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:08.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:08.483 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:08.483 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:08.483 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:08.483 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:08.483 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:08.483 02:51:38 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:08.483 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:08.483 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:08.483 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:08.483 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:08.483 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:08.483 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:08.483 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:08.483 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:08.483 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:08.483 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:08.483 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:08.483 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:30:08.483 02:51:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:15.052 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:15.052 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:15.052 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:15.052 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:15.052 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:15.052 02:51:44 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:15.052 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:15.052 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:30:15.052 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:15.052 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:30:15.052 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:30:15.052 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:30:15.052 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:30:15.052 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:30:15.052 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:15.052 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:15.052 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:15.052 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:15.052 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:15.052 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:15.052 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:15.052 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:15.052 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:15.052 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:15.052 
02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:15.052 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:15.052 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:15.053 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:15.053 Found 0000:af:00.1 (0x8086 - 
0x159b) 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:15.053 Found net devices under 0000:af:00.0: cvl_0_0 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:15.053 02:51:44 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:15.053 Found net devices under 0000:af:00.1: cvl_0_1 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:15.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:15.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:30:15.053 00:30:15.053 --- 10.0.0.2 ping statistics --- 00:30:15.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.053 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:15.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:15.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:30:15.053 00:30:15.053 --- 10.0.0.1 ping statistics --- 00:30:15.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.053 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1121677 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1121677 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1121677 ']' 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:15.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:15.053 02:51:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:15.053 [2024-12-16 02:51:44.827278] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:30:15.053 [2024-12-16 02:51:44.827323] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:15.053 [2024-12-16 02:51:44.906588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:15.053 [2024-12-16 02:51:44.929347] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:15.053 [2024-12-16 02:51:44.929385] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:15.053 [2024-12-16 02:51:44.929392] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:15.053 [2024-12-16 02:51:44.929398] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:15.053 [2024-12-16 02:51:44.929402] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:15.053 [2024-12-16 02:51:44.930837] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:15.053 [2024-12-16 02:51:44.930949] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:30:15.053 [2024-12-16 02:51:44.930982] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.053 [2024-12-16 02:51:44.930983] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:30:15.053 02:51:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:15.053 02:51:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:30:15.053 02:51:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:15.053 02:51:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:15.054 02:51:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:15.054 02:51:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:15.054 02:51:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:15.054 02:51:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:17.581 02:51:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:17.581 02:51:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:17.839 02:51:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:30:17.839 02:51:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:18.097 02:51:48 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:30:18.097 02:51:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:30:18.097 02:51:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:18.097 02:51:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:18.097 02:51:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:18.097 [2024-12-16 02:51:48.691036] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:18.097 02:51:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:18.355 02:51:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:18.355 02:51:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:18.613 02:51:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:18.613 02:51:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:18.871 02:51:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:18.871 [2024-12-16 02:51:49.484545] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:18.871 02:51:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:30:19.129 02:51:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:30:19.129 02:51:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:30:19.129 02:51:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:19.129 02:51:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:30:20.502 Initializing NVMe Controllers 00:30:20.502 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:30:20.502 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:30:20.502 Initialization complete. Launching workers. 00:30:20.502 ======================================================== 00:30:20.502 Latency(us) 00:30:20.502 Device Information : IOPS MiB/s Average min max 00:30:20.502 PCIE (0000:5e:00.0) NSID 1 from core 0: 99027.15 386.82 322.78 38.88 6199.35 00:30:20.502 ======================================================== 00:30:20.502 Total : 99027.15 386.82 322.78 38.88 6199.35 00:30:20.502 00:30:20.502 02:51:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:21.876 Initializing NVMe Controllers 00:30:21.876 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:21.876 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:21.876 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:21.876 Initialization complete. Launching workers. 
00:30:21.876 ======================================================== 00:30:21.876 Latency(us) 00:30:21.876 Device Information : IOPS MiB/s Average min max 00:30:21.876 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 87.00 0.34 11870.90 108.58 45761.46 00:30:21.876 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 65.00 0.25 15964.00 7960.79 47904.66 00:30:21.876 ======================================================== 00:30:21.876 Total : 152.00 0.59 13621.24 108.58 47904.66 00:30:21.876 00:30:21.876 02:51:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:23.252 Initializing NVMe Controllers 00:30:23.252 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:23.252 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:23.252 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:23.252 Initialization complete. Launching workers. 
00:30:23.252 ======================================================== 00:30:23.252 Latency(us) 00:30:23.252 Device Information : IOPS MiB/s Average min max 00:30:23.252 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11424.69 44.63 2799.71 435.58 9001.91 00:30:23.252 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3783.92 14.78 8467.54 6452.20 16442.61 00:30:23.252 ======================================================== 00:30:23.252 Total : 15208.60 59.41 4209.87 435.58 16442.61 00:30:23.252 00:30:23.252 02:51:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:23.252 02:51:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:30:23.252 02:51:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:25.791 Initializing NVMe Controllers 00:30:25.791 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:25.791 Controller IO queue size 128, less than required. 00:30:25.791 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:25.791 Controller IO queue size 128, less than required. 00:30:25.791 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:25.791 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:25.791 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:25.791 Initialization complete. Launching workers. 
00:30:25.791 ======================================================== 00:30:25.791 Latency(us) 00:30:25.791 Device Information : IOPS MiB/s Average min max 00:30:25.791 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1790.86 447.71 72542.22 50606.55 131076.69 00:30:25.791 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 624.45 156.11 215148.90 71874.24 333323.06 00:30:25.791 ======================================================== 00:30:25.791 Total : 2415.31 603.83 109411.55 50606.55 333323.06 00:30:25.791 00:30:25.791 02:51:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:26.049 No valid NVMe controllers or AIO or URING devices found 00:30:26.049 Initializing NVMe Controllers 00:30:26.049 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:26.050 Controller IO queue size 128, less than required. 00:30:26.050 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:26.050 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:26.050 Controller IO queue size 128, less than required. 00:30:26.050 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:26.050 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:30:26.050 WARNING: Some requested NVMe devices were skipped 00:30:26.050 02:51:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:28.581 Initializing NVMe Controllers 00:30:28.581 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:28.581 Controller IO queue size 128, less than required. 00:30:28.581 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:28.581 Controller IO queue size 128, less than required. 00:30:28.581 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:28.581 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:28.581 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:28.581 Initialization complete. Launching workers. 
00:30:28.581 00:30:28.581 ==================== 00:30:28.581 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:28.581 TCP transport: 00:30:28.581 polls: 11283 00:30:28.581 idle_polls: 7920 00:30:28.581 sock_completions: 3363 00:30:28.581 nvme_completions: 6293 00:30:28.581 submitted_requests: 9454 00:30:28.581 queued_requests: 1 00:30:28.581 00:30:28.581 ==================== 00:30:28.581 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:28.581 TCP transport: 00:30:28.581 polls: 11829 00:30:28.581 idle_polls: 7957 00:30:28.581 sock_completions: 3872 00:30:28.581 nvme_completions: 7029 00:30:28.581 submitted_requests: 10614 00:30:28.581 queued_requests: 1 00:30:28.581 ======================================================== 00:30:28.581 Latency(us) 00:30:28.581 Device Information : IOPS MiB/s Average min max 00:30:28.581 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1570.23 392.56 83463.05 48163.24 129334.78 00:30:28.581 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1753.91 438.48 74064.21 43445.29 134219.84 00:30:28.581 ======================================================== 00:30:28.581 Total : 3324.13 831.03 78503.97 43445.29 134219.84 00:30:28.581 00:30:28.581 02:51:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:28.581 02:51:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:28.839 02:51:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:28.839 02:51:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:30:28.839 02:51:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:32.270 02:52:02 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@72 -- # ls_guid=df3bd399-1f79-4577-bb7f-87117473a619 00:30:32.270 02:52:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb df3bd399-1f79-4577-bb7f-87117473a619 00:30:32.270 02:52:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=df3bd399-1f79-4577-bb7f-87117473a619 00:30:32.270 02:52:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:32.270 02:52:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:32.270 02:52:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:32.270 02:52:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:32.270 02:52:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:32.270 { 00:30:32.270 "uuid": "df3bd399-1f79-4577-bb7f-87117473a619", 00:30:32.270 "name": "lvs_0", 00:30:32.270 "base_bdev": "Nvme0n1", 00:30:32.270 "total_data_clusters": 238234, 00:30:32.270 "free_clusters": 238234, 00:30:32.270 "block_size": 512, 00:30:32.270 "cluster_size": 4194304 00:30:32.270 } 00:30:32.270 ]' 00:30:32.270 02:52:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="df3bd399-1f79-4577-bb7f-87117473a619") .free_clusters' 00:30:32.270 02:52:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:30:32.270 02:52:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="df3bd399-1f79-4577-bb7f-87117473a619") .cluster_size' 00:30:32.270 02:52:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:32.270 02:52:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:30:32.270 02:52:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 
00:30:32.270 952936 00:30:32.270 02:52:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:30:32.270 02:52:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:32.270 02:52:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u df3bd399-1f79-4577-bb7f-87117473a619 lbd_0 20480 00:30:32.834 02:52:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=c398d319-5a21-46ee-a662-e2cd2a23155e 00:30:32.834 02:52:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore c398d319-5a21-46ee-a662-e2cd2a23155e lvs_n_0 00:30:33.400 02:52:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=68136b9d-4f38-4a3d-8cfa-061069dd2b44 00:30:33.400 02:52:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 68136b9d-4f38-4a3d-8cfa-061069dd2b44 00:30:33.400 02:52:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=68136b9d-4f38-4a3d-8cfa-061069dd2b44 00:30:33.400 02:52:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:33.400 02:52:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:33.400 02:52:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:33.400 02:52:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:33.657 02:52:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:33.657 { 00:30:33.657 "uuid": "df3bd399-1f79-4577-bb7f-87117473a619", 00:30:33.657 "name": "lvs_0", 00:30:33.657 "base_bdev": "Nvme0n1", 00:30:33.657 "total_data_clusters": 238234, 00:30:33.657 "free_clusters": 233114, 00:30:33.657 "block_size": 512, 00:30:33.657 
"cluster_size": 4194304 00:30:33.657 }, 00:30:33.657 { 00:30:33.657 "uuid": "68136b9d-4f38-4a3d-8cfa-061069dd2b44", 00:30:33.657 "name": "lvs_n_0", 00:30:33.657 "base_bdev": "c398d319-5a21-46ee-a662-e2cd2a23155e", 00:30:33.657 "total_data_clusters": 5114, 00:30:33.657 "free_clusters": 5114, 00:30:33.657 "block_size": 512, 00:30:33.657 "cluster_size": 4194304 00:30:33.657 } 00:30:33.657 ]' 00:30:33.657 02:52:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="68136b9d-4f38-4a3d-8cfa-061069dd2b44") .free_clusters' 00:30:33.657 02:52:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:30:33.657 02:52:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="68136b9d-4f38-4a3d-8cfa-061069dd2b44") .cluster_size' 00:30:33.657 02:52:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:33.657 02:52:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:30:33.657 02:52:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 20456 00:30:33.657 20456 00:30:33.657 02:52:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:33.658 02:52:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 68136b9d-4f38-4a3d-8cfa-061069dd2b44 lbd_nest_0 20456 00:30:33.915 02:52:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=bb8b3730-fa48-41fc-bf9b-53d38eb2d23e 00:30:33.915 02:52:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:34.173 02:52:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:34.173 02:52:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bb8b3730-fa48-41fc-bf9b-53d38eb2d23e 00:30:34.431 02:52:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:34.431 02:52:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:34.431 02:52:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:34.431 02:52:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:34.431 02:52:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:34.431 02:52:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:46.627 Initializing NVMe Controllers 00:30:46.627 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:46.627 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:46.627 Initialization complete. Launching workers. 
00:30:46.627 ======================================================== 00:30:46.627 Latency(us) 00:30:46.627 Device Information : IOPS MiB/s Average min max 00:30:46.627 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 47.38 0.02 21119.58 127.57 45734.07 00:30:46.627 ======================================================== 00:30:46.627 Total : 47.38 0.02 21119.58 127.57 45734.07 00:30:46.627 00:30:46.627 02:52:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:46.627 02:52:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:56.601 Initializing NVMe Controllers 00:30:56.601 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:56.601 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:56.601 Initialization complete. Launching workers. 
00:30:56.601 ======================================================== 00:30:56.601 Latency(us) 00:30:56.601 Device Information : IOPS MiB/s Average min max 00:30:56.601 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 67.30 8.41 14866.32 4036.40 55815.97 00:30:56.601 ======================================================== 00:30:56.601 Total : 67.30 8.41 14866.32 4036.40 55815.97 00:30:56.601 00:30:56.601 02:52:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:56.601 02:52:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:56.601 02:52:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:06.575 Initializing NVMe Controllers 00:31:06.575 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:06.575 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:06.575 Initialization complete. Launching workers. 
00:31:06.575 ======================================================== 00:31:06.575 Latency(us) 00:31:06.575 Device Information : IOPS MiB/s Average min max 00:31:06.575 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8666.42 4.23 3691.94 243.01 10121.46 00:31:06.575 ======================================================== 00:31:06.575 Total : 8666.42 4.23 3691.94 243.01 10121.46 00:31:06.575 00:31:06.575 02:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:06.575 02:52:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:16.552 Initializing NVMe Controllers 00:31:16.552 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:16.552 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:16.552 Initialization complete. Launching workers. 
00:31:16.552 ======================================================== 00:31:16.552 Latency(us) 00:31:16.552 Device Information : IOPS MiB/s Average min max 00:31:16.552 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4377.21 547.15 7310.98 473.27 17321.23 00:31:16.552 ======================================================== 00:31:16.552 Total : 4377.21 547.15 7310.98 473.27 17321.23 00:31:16.552 00:31:16.552 02:52:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:16.552 02:52:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:16.552 02:52:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:26.531 Initializing NVMe Controllers 00:31:26.531 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:26.531 Controller IO queue size 128, less than required. 00:31:26.531 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:26.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:26.531 Initialization complete. Launching workers. 
00:31:26.531 ======================================================== 00:31:26.531 Latency(us) 00:31:26.531 Device Information : IOPS MiB/s Average min max 00:31:26.531 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15883.97 7.76 8058.17 1356.77 22698.43 00:31:26.531 ======================================================== 00:31:26.531 Total : 15883.97 7.76 8058.17 1356.77 22698.43 00:31:26.531 00:31:26.531 02:52:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:26.531 02:52:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:38.736 Initializing NVMe Controllers 00:31:38.736 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:38.736 Controller IO queue size 128, less than required. 00:31:38.736 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:38.736 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:38.736 Initialization complete. Launching workers. 
00:31:38.736 ======================================================== 00:31:38.736 Latency(us) 00:31:38.736 Device Information : IOPS MiB/s Average min max 00:31:38.736 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1179.28 147.41 108796.29 15412.21 211505.03 00:31:38.736 ======================================================== 00:31:38.736 Total : 1179.28 147.41 108796.29 15412.21 211505.03 00:31:38.736 00:31:38.736 02:53:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:38.736 02:53:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bb8b3730-fa48-41fc-bf9b-53d38eb2d23e 00:31:38.736 02:53:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:38.737 02:53:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c398d319-5a21-46ee-a662-e2cd2a23155e 00:31:38.737 02:53:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:38.737 02:53:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:38.737 02:53:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:38.737 02:53:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:38.737 02:53:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:31:38.737 02:53:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:38.737 02:53:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:31:38.737 02:53:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i 
in {1..20} 00:31:38.737 02:53:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:38.737 rmmod nvme_tcp 00:31:38.737 rmmod nvme_fabrics 00:31:38.737 rmmod nvme_keyring 00:31:38.737 02:53:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:38.737 02:53:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:31:38.737 02:53:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:31:38.737 02:53:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1121677 ']' 00:31:38.737 02:53:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1121677 00:31:38.737 02:53:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1121677 ']' 00:31:38.737 02:53:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1121677 00:31:38.737 02:53:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:31:38.737 02:53:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:38.737 02:53:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1121677 00:31:38.737 02:53:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:38.737 02:53:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:38.737 02:53:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1121677' 00:31:38.737 killing process with pid 1121677 00:31:38.737 02:53:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 1121677 00:31:38.737 02:53:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1121677 00:31:40.112 02:53:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:40.112 02:53:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # 
[[ tcp == \t\c\p ]] 00:31:40.112 02:53:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:40.112 02:53:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:31:40.112 02:53:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:31:40.112 02:53:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:31:40.112 02:53:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:40.112 02:53:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:40.112 02:53:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:40.112 02:53:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.112 02:53:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:40.112 02:53:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:42.016 02:53:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:42.016 00:31:42.016 real 1m33.772s 00:31:42.016 user 5m34.290s 00:31:42.016 sys 0m17.369s 00:31:42.016 02:53:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:42.017 02:53:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:42.017 ************************************ 00:31:42.017 END TEST nvmf_perf 00:31:42.017 ************************************ 00:31:42.017 02:53:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:42.017 02:53:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:42.017 02:53:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:42.017 02:53:12 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:42.017 ************************************ 00:31:42.017 START TEST nvmf_fio_host 00:31:42.017 ************************************ 00:31:42.017 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:42.017 * Looking for test storage... 00:31:42.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:42.017 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:42.017 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:31:42.017 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:42.017 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:42.017 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:42.017 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:42.017 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:42.017 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:42.017 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:42.017 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:42.017 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:42.017 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- 
# export 'LCOV_OPTS= 00:31:42.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.277 --rc genhtml_branch_coverage=1 00:31:42.277 --rc genhtml_function_coverage=1 00:31:42.277 --rc genhtml_legend=1 00:31:42.277 --rc geninfo_all_blocks=1 00:31:42.277 --rc geninfo_unexecuted_blocks=1 00:31:42.277 00:31:42.277 ' 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:42.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.277 --rc genhtml_branch_coverage=1 00:31:42.277 --rc genhtml_function_coverage=1 00:31:42.277 --rc genhtml_legend=1 00:31:42.277 --rc geninfo_all_blocks=1 00:31:42.277 --rc geninfo_unexecuted_blocks=1 00:31:42.277 00:31:42.277 ' 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:42.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.277 --rc genhtml_branch_coverage=1 00:31:42.277 --rc genhtml_function_coverage=1 00:31:42.277 --rc genhtml_legend=1 00:31:42.277 --rc geninfo_all_blocks=1 00:31:42.277 --rc geninfo_unexecuted_blocks=1 00:31:42.277 00:31:42.277 ' 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:42.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:42.277 --rc genhtml_branch_coverage=1 00:31:42.277 --rc genhtml_function_coverage=1 00:31:42.277 --rc genhtml_legend=1 00:31:42.277 --rc geninfo_all_blocks=1 00:31:42.277 --rc geninfo_unexecuted_blocks=1 00:31:42.277 00:31:42.277 ' 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:42.277 02:53:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.277 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.278 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.278 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:42.278 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:42.278 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:31:42.278 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:42.278 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:42.278 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:42.278 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:42.278 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:42.278 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:42.278 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:42.278 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:42.278 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:42.278 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:42.278 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:42.278 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:42.278 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:42.278 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:42.278 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:42.278 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:42.278 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:42.278 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:42.278 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:42.278 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:42.278 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:42.278 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:42.278 02:53:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:42.278 02:53:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:48.841 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:48.841 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:48.841 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:48.841 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:48.841 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:48.841 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:48.841 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:48.841 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:48.841 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:48.841 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:31:48.841 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:48.841 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:31:48.841 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:48.841 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:31:48.841 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:48.841 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:48.841 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:48.841 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:48.841 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:48.841 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:48.841 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:48.841 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:48.841 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:48.841 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:48.841 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:48.841 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:48.841 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:48.841 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:48.841 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:48.841 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.0 (0x8086 - 0x159b)' 00:31:48.842 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:48.842 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.842 02:53:18 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:48.842 Found net devices under 0000:af:00.0: cvl_0_0 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:48.842 Found net devices under 0000:af:00.1: cvl_0_1 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:48.842 02:53:18 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:48.842 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:48.842 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:31:48.842 00:31:48.842 --- 10.0.0.2 ping statistics --- 00:31:48.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.842 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:48.842 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:48.842 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms
00:31:48.842 
00:31:48.842 --- 10.0.0.1 ping statistics ---
00:31:48.842 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:31:48.842 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms
00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0
00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]]
00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt
00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1138572
00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1138572
00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1138572 ']'
00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:48.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:31:48.842 [2024-12-16 02:53:18.706147] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:31:48.842 [2024-12-16 02:53:18.706196] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:31:48.842 [2024-12-16 02:53:18.788374] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:31:48.842 [2024-12-16 02:53:18.811710] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:31:48.842 [2024-12-16 02:53:18.811752] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:31:48.842 [2024-12-16 02:53:18.811760] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:31:48.842 [2024-12-16 02:53:18.811767] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:31:48.842 [2024-12-16 02:53:18.811773] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:31:48.842 [2024-12-16 02:53:18.813214] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:31:48.842 [2024-12-16 02:53:18.813271] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:31:48.842 [2024-12-16 02:53:18.813379] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:31:48.842 [2024-12-16 02:53:18.813380] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:31:48.842 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:48.843 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0
00:31:48.843 02:53:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:31:48.843 [2024-12-16 02:53:19.093613] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:48.843 02:53:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt
00:31:48.843 02:53:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:48.843 02:53:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:31:48.843 02:53:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:31:48.843 Malloc1
00:31:48.843 02:53:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:31:49.102 02:53:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:31:49.361 02:53:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:49.361 [2024-12-16 02:53:19.932834] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:49.361 02:53:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:31:49.620 02:53:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:31:49.620 02:53:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:31:49.620 02:53:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:31:49.620 02:53:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:31:49.620 02:53:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:31:49.620 02:53:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers
00:31:49.620 02:53:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:31:49.620 02:53:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift
00:31:49.620 02:53:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib=
00:31:49.620 02:53:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:31:49.620 02:53:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:31:49.620 02:53:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan
00:31:49.620 02:53:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:31:49.620 02:53:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:31:49.620 02:53:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:31:49.620 02:53:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:31:49.620 02:53:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:31:49.620 02:53:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:31:49.620 02:53:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:31:49.620 02:53:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:31:49.620 02:53:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:31:49.620 02:53:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:31:49.620 02:53:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:31:49.879 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:31:49.879 fio-3.35
00:31:49.879 Starting 1 thread
00:31:52.413 
00:31:52.413 test: (groupid=0, jobs=1): err= 0: pid=1139149: Mon Dec 16 02:53:22 2024
00:31:52.413 read: IOPS=11.8k, BW=46.2MiB/s (48.5MB/s)(92.7MiB/2005msec)
00:31:52.413 slat (nsec): min=1535, max=241761, avg=1721.93, stdev=2251.50
00:31:52.413 clat (usec): min=3091, max=9835, avg=5965.98, stdev=425.93
00:31:52.413 lat (usec): min=3129, max=9837, avg=5967.70, stdev=425.74
00:31:52.413 clat percentiles (usec):
00:31:52.413 | 1.00th=[ 4948], 5.00th=[ 5276], 10.00th=[ 5407], 20.00th=[ 5604],
00:31:52.413 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5997], 60.00th=[ 6063],
00:31:52.413 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6521], 95.00th=[ 6652],
00:31:52.413 | 99.00th=[ 6915], 99.50th=[ 6980], 99.90th=[ 7832], 99.95th=[ 8979],
00:31:52.413 | 99.99th=[ 9765]
00:31:52.413 bw ( KiB/s): min=46000, max=48104, per=99.98%, avg=47334.00, stdev=968.55, samples=4
00:31:52.413 iops : min=11500, max=12026, avg=11833.50, stdev=242.14, samples=4
00:31:52.413 write: IOPS=11.8k, BW=46.0MiB/s (48.2MB/s)(92.3MiB/2005msec); 0 zone resets
00:31:52.413 slat (nsec): min=1574, max=226265, avg=1780.57, stdev=1652.84
00:31:52.413 clat (usec): min=2450, max=9252, avg=4807.63, stdev=369.92
00:31:52.413 lat (usec): min=2465, max=9254, avg=4809.41, stdev=369.82
00:31:52.413 clat percentiles (usec):
00:31:52.413 | 1.00th=[ 4015], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490],
00:31:52.413 | 30.00th=[ 4621], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4883],
00:31:52.413 | 70.00th=[ 5014], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342],
00:31:52.413 | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 7832], 99.95th=[ 8979],
00:31:52.413 | 99.99th=[ 9241]
00:31:52.413 bw ( KiB/s): min=46512, max=47616, per=99.99%, avg=47110.00, stdev=488.08, samples=4
00:31:52.413 iops : min=11628, max=11904, avg=11777.50, stdev=122.02, samples=4
00:31:52.413 lat (msec) : 4=0.55%, 10=99.45%
00:31:52.413 cpu : usr=72.80%, sys=26.10%, ctx=146, majf=0, minf=3
00:31:52.413 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:31:52.413 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:52.413 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:31:52.413 issued rwts: total=23732,23617,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:52.413 latency : target=0, window=0, percentile=100.00%, depth=128
00:31:52.413 
00:31:52.413 Run status group 0 (all jobs):
00:31:52.413 READ: bw=46.2MiB/s (48.5MB/s), 46.2MiB/s-46.2MiB/s (48.5MB/s-48.5MB/s), io=92.7MiB (97.2MB), run=2005-2005msec
00:31:52.413 WRITE: bw=46.0MiB/s (48.2MB/s), 46.0MiB/s-46.0MiB/s (48.2MB/s-48.2MB/s), io=92.3MiB (96.7MB), run=2005-2005msec
00:31:52.413 02:53:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:31:52.413 02:53:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:31:52.413 02:53:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:31:52.413 02:53:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:31:52.413 02:53:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers
00:31:52.413 02:53:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:31:52.413 02:53:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift
00:31:52.413 02:53:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib=
00:31:52.413 02:53:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:31:52.413 02:53:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:31:52.413 02:53:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan
00:31:52.413 02:53:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:31:52.413 02:53:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:31:52.413 02:53:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:31:52.413 02:53:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:31:52.413 02:53:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:31:52.413 02:53:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:31:52.413 02:53:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:31:52.413 02:53:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:31:52.413 02:53:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:31:52.413 02:53:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:31:52.413 02:53:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:31:52.672 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128
00:31:52.672 fio-3.35
00:31:52.672 Starting 1 thread
00:31:55.206 
00:31:55.206 test: (groupid=0, jobs=1): err= 0: pid=1139708: Mon Dec 16 02:53:25 2024
00:31:55.206 read: IOPS=10.8k, BW=169MiB/s (177MB/s)(338MiB/2007msec)
00:31:55.206 slat (nsec): min=2483, max=81449, avg=2760.00, stdev=1187.03
00:31:55.206 clat (usec): min=1599, max=51581, avg=6945.05, stdev=3519.86
00:31:55.206 lat (usec): min=1602, max=51584, avg=6947.81, stdev=3519.92
00:31:55.206 clat percentiles (usec):
00:31:55.206 | 1.00th=[ 3589], 5.00th=[ 4228], 10.00th=[ 4686], 20.00th=[ 5276],
00:31:55.206 | 30.00th=[ 5800], 40.00th=[ 6259], 50.00th=[ 6718], 60.00th=[ 7177],
00:31:55.206 | 70.00th=[ 7504], 80.00th=[ 8029], 90.00th=[ 8717], 95.00th=[ 9503],
00:31:55.206 | 99.00th=[11600], 99.50th=[44827], 99.90th=[50070], 99.95th=[50594],
00:31:55.206 | 99.99th=[51643]
00:31:55.206 bw ( KiB/s): min=76032, max=97280, per=50.20%, avg=86648.00, stdev=8686.51, samples=4
00:31:55.206 iops : min= 4752, max= 6080, avg=5415.50, stdev=542.91, samples=4
00:31:55.206 write: IOPS=6384, BW=99.8MiB/s (105MB/s)(177MiB/1776msec); 0 zone resets
00:31:55.206 slat (usec): min=28, max=305, avg=31.02, stdev= 6.22
00:31:55.206 clat (usec): min=3621, max=14834, avg=8548.18, stdev=1455.51
00:31:55.206 lat (usec): min=3659, max=14864, avg=8579.20, stdev=1456.53
00:31:55.206 clat percentiles (usec):
00:31:55.206 | 1.00th=[ 5735], 5.00th=[ 6521], 10.00th=[ 6849], 20.00th=[ 7308],
00:31:55.206 | 30.00th=[ 7635], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8717],
00:31:55.206 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[11338],
00:31:55.206 | 99.00th=[12387], 99.50th=[12911], 99.90th=[14222], 99.95th=[14484],
00:31:55.206 | 99.99th=[14746]
00:31:55.206 bw ( KiB/s): min=79584, max=101376, per=88.22%, avg=90120.00, stdev=8906.41, samples=4
00:31:55.206 iops : min= 4974, max= 6336, avg=5632.50, stdev=556.65, samples=4
00:31:55.206 lat (msec) : 2=0.05%, 4=2.00%, 10=90.50%, 20=7.06%, 50=0.31%
00:31:55.206 lat (msec) : 100=0.08%
00:31:55.206 cpu : usr=86.34%, sys=12.86%, ctx=50, majf=0, minf=3
00:31:55.206 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7%
00:31:55.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:55.206 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:31:55.206 issued rwts: total=21653,11339,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:55.206 latency : target=0, window=0, percentile=100.00%, depth=128
00:31:55.206 
00:31:55.206 Run status group 0 (all jobs):
00:31:55.206 READ: bw=169MiB/s (177MB/s), 169MiB/s-169MiB/s (177MB/s-177MB/s), io=338MiB (355MB), run=2007-2007msec
00:31:55.206 WRITE: bw=99.8MiB/s (105MB/s), 99.8MiB/s-99.8MiB/s (105MB/s-105MB/s), io=177MiB (186MB), run=1776-1776msec
00:31:55.206 02:53:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:55.206 02:53:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']'
00:31:55.206 02:53:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs))
00:31:55.206 02:53:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs
00:31:55.206 02:53:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=()
00:31:55.206 02:53:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs
00:31:55.206 02:53:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:31:55.206 02:53:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:31:55.206 02:53:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr'
00:31:55.465 02:53:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 ))
00:31:55.465 02:53:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0
00:31:55.465 02:53:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2
00:31:58.753 Nvme0n1
00:31:58.753 02:53:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0
00:32:01.338 02:53:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=1d0b7965-f6b4-4403-b24f-9a3c767d4441
00:32:01.338 02:53:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 1d0b7965-f6b4-4403-b24f-9a3c767d4441
00:32:01.338 02:53:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=1d0b7965-f6b4-4403-b24f-9a3c767d4441
00:32:01.338 02:53:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info
00:32:01.338 02:53:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc
00:32:01.338 02:53:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs
00:32:01.338 02:53:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:32:01.338 02:53:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[
00:32:01.338 {
00:32:01.338 "uuid": "1d0b7965-f6b4-4403-b24f-9a3c767d4441",
00:32:01.338 "name": "lvs_0",
00:32:01.339 "base_bdev": "Nvme0n1",
00:32:01.339 "total_data_clusters": 930,
00:32:01.339 "free_clusters": 930,
00:32:01.339 "block_size": 512,
00:32:01.339 "cluster_size": 1073741824
00:32:01.339 }
00:32:01.339 ]'
00:32:01.339 02:53:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="1d0b7965-f6b4-4403-b24f-9a3c767d4441") .free_clusters'
00:32:01.598 02:53:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930
00:32:01.598 02:53:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="1d0b7965-f6b4-4403-b24f-9a3c767d4441") .cluster_size'
00:32:01.598 02:53:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824
00:32:01.598 02:53:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320
00:32:01.598 02:53:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320
00:32:01.598 952320
00:32:01.598 02:53:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320
00:32:01.856 9318dae1-e625-404a-b0e9-9d20418e8824
00:32:01.856 02:53:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001
00:32:02.115 02:53:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0
00:32:02.374 02:53:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:32:02.374 02:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:32:02.375 02:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:32:02.375 02:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:32:02.375 02:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:32:02.375 02:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers
00:32:02.375 02:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:32:02.375 02:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift
00:32:02.375 02:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib=
00:32:02.375 02:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:32:02.375 02:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:32:02.375 02:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan
00:32:02.375 02:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:32:02.652 02:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:32:02.652 02:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:32:02.652 02:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:32:02.652 02:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:32:02.652 02:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:32:02.652 02:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:32:02.652 02:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:32:02.652 02:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:32:02.652 02:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:32:02.652 02:53:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:32:02.910 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:32:02.910 fio-3.35
00:32:02.910 Starting 1 thread
00:32:05.437 
00:32:05.437 test: (groupid=0, jobs=1): err= 0: pid=1141416: Mon Dec 16 02:53:35 2024
00:32:05.437 read: IOPS=8169, BW=31.9MiB/s (33.5MB/s)(64.0MiB/2006msec)
00:32:05.437 slat (nsec): min=1495, max=91278, avg=1644.56, stdev=1057.34
00:32:05.437 clat (usec): min=851, max=169768, avg=8613.19, stdev=10199.87
00:32:05.437 lat (usec): min=853, max=169792, avg=8614.83, stdev=10200.01
00:32:05.437 clat percentiles (msec):
00:32:05.437 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8],
00:32:05.437 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 8], 60.00th=[ 9],
00:32:05.437 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 9],
00:32:05.437 | 99.00th=[ 10], 99.50th=[ 12], 99.90th=[ 169], 99.95th=[ 169],
00:32:05.437 | 99.99th=[ 169]
00:32:05.437 bw ( KiB/s): min=23328, max=35960, per=99.85%, avg=32630.00, stdev=6203.67, samples=4
00:32:05.437 iops : min= 5832, max= 8990, avg=8157.50, stdev=1550.92, samples=4
00:32:05.437 write: IOPS=8163, BW=31.9MiB/s (33.4MB/s)(64.0MiB/2006msec); 0 zone resets
00:32:05.437 slat (nsec): min=1527, max=80479, avg=1709.34, stdev=774.49
00:32:05.437 clat (usec): min=278, max=168382, avg=6991.24, stdev=9521.02
00:32:05.437 lat (usec): min=280, max=168386, avg=6992.95, stdev=9521.18
00:32:05.437 clat percentiles (msec):
00:32:05.437 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6],
00:32:05.437 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7],
00:32:05.437 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 8], 95.00th=[ 8],
00:32:05.437 | 99.00th=[ 8], 99.50th=[ 10], 99.90th=[ 167], 99.95th=[ 169],
00:32:05.437 | 99.99th=[ 169]
00:32:05.437 bw ( KiB/s): min=24488, max=35488, per=99.97%, avg=32642.00, stdev=5437.32, samples=4
00:32:05.437 iops : min= 6122, max= 8872, avg=8160.50, stdev=1359.33, samples=4
00:32:05.437 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01%
00:32:05.437 lat (msec) : 2=0.04%, 4=0.24%, 10=99.16%, 20=0.15%, 250=0.39%
00:32:05.437 cpu : usr=72.22%, sys=27.03%, ctx=70, majf=0, minf=3
00:32:05.437 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:32:05.437 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:05.437 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:32:05.437 issued rwts: total=16388,16375,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:05.437 latency : target=0, window=0, percentile=100.00%, depth=128
00:32:05.437 
00:32:05.437 Run status group 0 (all jobs):
00:32:05.437 READ: bw=31.9MiB/s (33.5MB/s), 31.9MiB/s-31.9MiB/s (33.5MB/s-33.5MB/s), io=64.0MiB (67.1MB), run=2006-2006msec
00:32:05.437 WRITE: bw=31.9MiB/s (33.4MB/s), 31.9MiB/s-31.9MiB/s (33.4MB/s-33.4MB/s), io=64.0MiB (67.1MB), run=2006-2006msec
00:32:05.437 02:53:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:32:05.437 02:53:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0
00:32:06.809 02:53:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=f82e0c0f-4b50-4e2c-a595-45691a93d9d3
00:32:06.810 02:53:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb f82e0c0f-4b50-4e2c-a595-45691a93d9d3
00:32:06.810 02:53:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=f82e0c0f-4b50-4e2c-a595-45691a93d9d3
00:32:06.810 02:53:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info
00:32:06.810 02:53:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc
00:32:06.810 02:53:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs
00:32:06.810 02:53:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:32:06.810 02:53:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[
00:32:06.810 {
00:32:06.810 "uuid": "1d0b7965-f6b4-4403-b24f-9a3c767d4441",
00:32:06.810 "name": "lvs_0",
00:32:06.810 "base_bdev": "Nvme0n1",
00:32:06.810 "total_data_clusters": 930,
00:32:06.810 "free_clusters": 0,
00:32:06.810 "block_size": 512,
00:32:06.810 "cluster_size": 1073741824
00:32:06.810 },
00:32:06.810 {
00:32:06.810 "uuid": "f82e0c0f-4b50-4e2c-a595-45691a93d9d3",
00:32:06.810 "name": "lvs_n_0",
00:32:06.810 "base_bdev": "9318dae1-e625-404a-b0e9-9d20418e8824",
00:32:06.810 "total_data_clusters": 237847,
00:32:06.810 "free_clusters": 237847,
00:32:06.810 "block_size": 512,
00:32:06.810 "cluster_size": 4194304
00:32:06.810 }
00:32:06.810 ]'
00:32:06.810 02:53:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="f82e0c0f-4b50-4e2c-a595-45691a93d9d3") .free_clusters'
00:32:06.810 02:53:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847
00:32:06.810 02:53:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="f82e0c0f-4b50-4e2c-a595-45691a93d9d3") .cluster_size'
00:32:06.810 02:53:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304
00:32:06.810 02:53:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388
00:32:06.810 02:53:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388
00:32:06.810 951388
00:32:06.810 02:53:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388
00:32:07.375 3d575335-3f2c-45fa-a1c3-440a61cf0b96
00:32:07.375 02:53:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001
00:32:07.633 02:53:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0
00:32:07.890 02:53:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420
00:32:08.148 02:53:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:32:08.148 02:53:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:32:08.148 02:53:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:32:08.148 02:53:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:32:08.148 02:53:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers
00:32:08.148 02:53:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:32:08.148 02:53:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift
00:32:08.148 02:53:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib=
00:32:08.148 02:53:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:32:08.148 02:53:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:32:08.148 02:53:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan
00:32:08.148 02:53:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:32:08.148 02:53:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:32:08.148 02:53:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:32:08.148 02:53:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:32:08.148 02:53:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:32:08.149 02:53:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:32:08.149 02:53:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:32:08.149 02:53:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=
00:32:08.149 02:53:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:32:08.149 02:53:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:32:08.149 02:53:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:32:08.408 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:32:08.408 fio-3.35
00:32:08.408 Starting 1 thread
00:32:10.935 
00:32:10.935 test: (groupid=0, jobs=1): err= 0: pid=1142432: Mon Dec 16 02:53:41 2024
00:32:10.935 read: IOPS=7868, BW=30.7MiB/s (32.2MB/s)(61.7MiB/2007msec)
00:32:10.935 slat (nsec): min=1510, max=104835, avg=1669.48, stdev=1170.63
00:32:10.935 clat (usec): min=3008, max=14082, avg=8949.60, stdev=793.75
00:32:10.935 lat (usec): min=3013, max=14083, avg=8951.27, stdev=793.69
00:32:10.935 clat percentiles (usec):
00:32:10.935 | 1.00th=[ 7046], 5.00th=[ 7701], 10.00th=[ 7963], 20.00th=[ 8291],
00:32:10.935 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9110],
00:32:10.935 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[ 9896], 95.00th=[10159],
00:32:10.935 | 99.00th=[10683], 99.50th=[10945], 99.90th=[11863], 99.95th=[12911],
00:32:10.935 | 99.99th=[14091]
00:32:10.935 bw ( KiB/s): min=30440, max=32032, per=99.92%, avg=31448.00, stdev=695.43, samples=4
00:32:10.935 iops : min= 7610, max= 8008, avg=7862.00, stdev=173.86, samples=4
00:32:10.935 write: IOPS=7842, BW=30.6MiB/s (32.1MB/s)(61.5MiB/2007msec); 0 zone resets
00:32:10.935 slat (nsec): min=1540, max=91326, avg=1738.03, stdev=818.06
00:32:10.935 clat (usec): min=1455, max=12949, avg=7269.46, stdev=660.53
00:32:10.935 lat (usec): min=1461, max=12950, avg=7271.20, stdev=660.49
00:32:10.935 clat percentiles (usec):
00:32:10.935 | 1.00th=[ 5735], 5.00th=[ 6325], 10.00th=[ 6521], 20.00th=[ 6783],
00:32:10.935 | 30.00th=[ 6915], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7439],
00:32:10.935 | 70.00th=[ 7570], 80.00th=[ 7767], 90.00th=[ 8029], 95.00th=[ 8291],
00:32:10.935 | 99.00th=[ 8717], 99.50th=[ 9110], 99.90th=[10814], 99.95th=[12649],
00:32:10.935 | 99.99th=[12911]
00:32:10.935 bw ( KiB/s): min=31168, max=31496, per=99.97%, avg=31362.00, stdev=140.76, samples=4
00:32:10.935 iops : min= 7792, max= 7874, avg=7840.50, stdev=35.19, samples=4
00:32:10.935 lat (msec) : 2=0.01%, 4=0.11%, 10=95.73%, 20=4.15%
00:32:10.935 cpu : usr=72.63%, sys=26.47%, ctx=128, majf=0, minf=3
00:32:10.935 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:32:10.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:32:10.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:32:10.935 issued rwts: total=15792,15740,0,0 short=0,0,0,0 dropped=0,0,0,0
00:32:10.935 latency : target=0, window=0, percentile=100.00%, depth=128
00:32:10.935 
00:32:10.935 Run
status group 0 (all jobs): 00:32:10.935 READ: bw=30.7MiB/s (32.2MB/s), 30.7MiB/s-30.7MiB/s (32.2MB/s-32.2MB/s), io=61.7MiB (64.7MB), run=2007-2007msec 00:32:10.935 WRITE: bw=30.6MiB/s (32.1MB/s), 30.6MiB/s-30.6MiB/s (32.1MB/s-32.1MB/s), io=61.5MiB (64.5MB), run=2007-2007msec 00:32:10.935 02:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:10.935 02:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:32:10.935 02:53:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:15.118 02:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:15.118 02:53:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:17.644 02:53:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:17.902 02:53:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:19.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:19.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:32:19.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:32:19.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:19.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:32:19.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:19.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:32:19.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:19.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:19.802 rmmod nvme_tcp 00:32:19.802 rmmod nvme_fabrics 00:32:19.802 rmmod nvme_keyring 00:32:19.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:19.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:32:19.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:32:19.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1138572 ']' 00:32:19.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1138572 00:32:19.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1138572 ']' 00:32:19.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 1138572 00:32:19.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:32:19.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:19.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1138572 00:32:19.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:19.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:19.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1138572' 00:32:19.802 killing process with pid 1138572 00:32:19.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 
1138572 00:32:19.802 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1138572 00:32:20.061 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:20.061 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:20.061 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:20.061 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:32:20.061 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:20.061 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:32:20.061 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:32:20.061 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:20.061 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:20.061 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:20.061 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:20.061 02:53:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:22.598 00:32:22.598 real 0m40.157s 00:32:22.598 user 2m41.146s 00:32:22.598 sys 0m8.887s 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.598 ************************************ 00:32:22.598 END TEST nvmf_fio_host 00:32:22.598 ************************************ 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 
-- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.598 ************************************ 00:32:22.598 START TEST nvmf_failover 00:32:22.598 ************************************ 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:22.598 * Looking for test storage... 00:32:22.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 
00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:22.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.598 --rc genhtml_branch_coverage=1 00:32:22.598 --rc genhtml_function_coverage=1 00:32:22.598 --rc genhtml_legend=1 00:32:22.598 --rc geninfo_all_blocks=1 00:32:22.598 --rc geninfo_unexecuted_blocks=1 00:32:22.598 00:32:22.598 ' 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:22.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.598 --rc genhtml_branch_coverage=1 00:32:22.598 --rc genhtml_function_coverage=1 00:32:22.598 --rc genhtml_legend=1 00:32:22.598 --rc geninfo_all_blocks=1 00:32:22.598 --rc geninfo_unexecuted_blocks=1 00:32:22.598 00:32:22.598 ' 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:22.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.598 --rc genhtml_branch_coverage=1 00:32:22.598 --rc genhtml_function_coverage=1 00:32:22.598 --rc genhtml_legend=1 00:32:22.598 --rc geninfo_all_blocks=1 00:32:22.598 --rc geninfo_unexecuted_blocks=1 00:32:22.598 00:32:22.598 ' 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:22.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.598 --rc genhtml_branch_coverage=1 00:32:22.598 --rc genhtml_function_coverage=1 00:32:22.598 --rc genhtml_legend=1 00:32:22.598 --rc geninfo_all_blocks=1 00:32:22.598 --rc geninfo_unexecuted_blocks=1 00:32:22.598 00:32:22.598 ' 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:22.598 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:22.599 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.599 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.599 02:53:52 
nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.599 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:32:22.599 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.599 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:32:22.599 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:22.599 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:22.599 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:22.599 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:22.599 02:53:52 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:22.599 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:22.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:22.599 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:22.599 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:22.599 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:22.599 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:22.599 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:22.599 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:22.599 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:22.599 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:32:22.599 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:22.599 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:22.599 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:22.599 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:22.599 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:22.599 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:22.599 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:22.599 02:53:52 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:22.599 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:22.599 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:22.599 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:32:22.599 02:53:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:32:29.170 02:53:58 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:29.170 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:29.170 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # 
[[ tcp == rdma ]] 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:29.170 Found net devices under 0000:af:00.0: cvl_0_0 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: 
cvl_0_1' 00:32:29.170 Found net devices under 0000:af:00.1: cvl_0_1 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # 
ip -4 addr flush cvl_0_0 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:29.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:29.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:32:29.170 00:32:29.170 --- 10.0.0.2 ping statistics --- 00:32:29.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:29.170 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:29.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:29.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:32:29.170 00:32:29.170 --- 10.0.0.1 ping statistics --- 00:32:29.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:29.170 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:32:29.170 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:29.171 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:29.171 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:29.171 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:29.171 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:29.171 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:29.171 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:29.171 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:29.171 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:29.171 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:29.171 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:29.171 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1147676 00:32:29.171 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1147676 00:32:29.171 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:29.171 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1147676 ']' 00:32:29.171 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:29.171 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:29.171 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:29.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:29.171 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:29.171 02:53:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:29.171 [2024-12-16 02:53:58.942395] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:32:29.171 [2024-12-16 02:53:58.942438] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:29.171 [2024-12-16 02:53:59.017452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:29.171 [2024-12-16 02:53:59.039255] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:29.171 [2024-12-16 02:53:59.039291] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:29.171 [2024-12-16 02:53:59.039298] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:29.171 [2024-12-16 02:53:59.039305] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:32:29.171 [2024-12-16 02:53:59.039310] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:29.171 [2024-12-16 02:53:59.040624] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:32:29.171 [2024-12-16 02:53:59.040733] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:29.171 [2024-12-16 02:53:59.040735] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:32:29.171 02:53:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:29.171 02:53:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:29.171 02:53:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:29.171 02:53:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:29.171 02:53:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:29.171 02:53:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:29.171 02:53:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:29.171 [2024-12-16 02:53:59.335560] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:29.171 02:53:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:29.171 Malloc0 00:32:29.171 02:53:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:29.171 02:53:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:29.429 02:53:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:29.687 [2024-12-16 02:54:00.144967] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:29.687 02:54:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:29.687 [2024-12-16 02:54:00.345486] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:29.944 02:54:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:29.944 [2024-12-16 02:54:00.554164] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:29.944 02:54:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:29.944 02:54:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1147967 00:32:29.944 02:54:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:29.944 02:54:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1147967 /var/tmp/bdevperf.sock 00:32:29.944 02:54:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 1147967 ']' 00:32:29.944 02:54:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:29.944 02:54:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:29.944 02:54:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:29.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:29.944 02:54:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:29.944 02:54:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:30.202 02:54:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:30.202 02:54:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:30.202 02:54:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:30.765 NVMe0n1 00:32:30.766 02:54:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:31.023 00:32:31.023 02:54:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1148202 00:32:31.023 02:54:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:31.023 02:54:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:32:31.956 02:54:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:32.215 [2024-12-16 02:54:02.748415] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fcac0 is same with the state(6) to be set
00:32:32.216 02:54:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:32:35.492 02:54:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:35.492 00:32:35.750 02:54:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:35.750 [2024-12-16 02:54:06.284042] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fd8e0 is same with the state(6) to be set
00:32:35.750 02:54:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:32:39.029 02:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:39.029 [2024-12-16 02:54:09.503368] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:39.029 02:54:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:32:39.961 02:54:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:40.219 [2024-12-16 02:54:10.714139] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12fe690 is same with the state(6) to be set
00:32:40.219 02:54:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1148202
00:32:46.780 {
00:32:46.780 "results": [
00:32:46.780 {
00:32:46.780 "job": "NVMe0n1",
00:32:46.780 "core_mask": "0x1",
00:32:46.780 "workload": "verify",
00:32:46.780 "status": "finished",
00:32:46.780 "verify_range": {
00:32:46.780 "start": 0,
00:32:46.780 "length": 16384
00:32:46.780 },
00:32:46.780 "queue_depth": 128,
00:32:46.780 "io_size": 4096,
00:32:46.780 "runtime": 15.006829,
00:32:46.780 "iops": 11065.029127739112,
00:32:46.780 "mibps": 43.222770030230905,
00:32:46.780 "io_failed": 19429,
00:32:46.780 "io_timeout": 0,
00:32:46.780 "avg_latency_us": 10334.601181634267,
00:32:46.780 "min_latency_us": 415.45142857142855,
00:32:46.780 "max_latency_us": 26089.569523809525
00:32:46.780 }
00:32:46.780 ],
00:32:46.780 "core_count": 1
00:32:46.780 }
00:32:46.780 02:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1147967 00:32:46.780 02:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '['
-z 1147967 ']'
00:32:46.780 02:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1147967
00:32:46.780 02:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:32:46.780 02:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:46.780 02:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1147967
00:32:46.780 02:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:32:46.780 02:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:32:46.780 02:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1147967'
00:32:46.780 killing process with pid 1147967
00:32:46.780 02:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1147967
00:32:46.780 02:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1147967
00:32:46.780 02:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:32:46.780 [2024-12-16 02:54:00.611063] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:32:46.780 [2024-12-16 02:54:00.611121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1147967 ]
00:32:46.780 [2024-12-16 02:54:00.685673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:32:46.780 [2024-12-16 02:54:00.708145] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:32:46.780 Running I/O for 15 seconds...
00:32:46.780 11204.00 IOPS, 43.77 MiB/s [2024-12-16T01:54:17.439Z]
00:32:46.780 [2024-12-16 02:54:02.750246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:46.780 [2024-12-16 02:54:02.750281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:46.781 [2024-12-16 02:54:02.750811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:46.781 [2024-12-16 02:54:02.750818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:46.782 [2024-12-16 
02:54:02.751550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:99296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.782 [2024-12-16 02:54:02.751556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.782 [2024-12-16 02:54:02.751564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.782 [2024-12-16 02:54:02.751572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.783 [2024-12-16 02:54:02.751580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:99312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.783 [2024-12-16 02:54:02.751587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.783 [2024-12-16 02:54:02.751595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.783 [2024-12-16 02:54:02.751601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.783 [2024-12-16 02:54:02.751609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:99328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.783 [2024-12-16 02:54:02.751615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.783 [2024-12-16 02:54:02.751623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.783 [2024-12-16 02:54:02.751629] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.783 [2024-12-16 02:54:02.751638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:99344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.783 [2024-12-16 02:54:02.751644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.783 [2024-12-16 02:54:02.751653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:99352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.783 [2024-12-16 02:54:02.751659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.783 [2024-12-16 02:54:02.751666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:99360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.783 [2024-12-16 02:54:02.751672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.783 [2024-12-16 02:54:02.751680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.783 [2024-12-16 02:54:02.751688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.783 [2024-12-16 02:54:02.751696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.783 [2024-12-16 02:54:02.751702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.783 [2024-12-16 02:54:02.751721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.783 [2024-12-16 
02:54:02.751729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99384 len:8 PRP1 0x0 PRP2 0x0 00:32:46.783 [2024-12-16 02:54:02.751735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.783 [2024-12-16 02:54:02.751748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.783 [2024-12-16 02:54:02.751754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.783 [2024-12-16 02:54:02.751759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99392 len:8 PRP1 0x0 PRP2 0x0 00:32:46.783 [2024-12-16 02:54:02.751765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.783 [2024-12-16 02:54:02.751772] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.783 [2024-12-16 02:54:02.751779] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.783 [2024-12-16 02:54:02.751784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99400 len:8 PRP1 0x0 PRP2 0x0 00:32:46.783 [2024-12-16 02:54:02.751791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.783 [2024-12-16 02:54:02.751798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.783 [2024-12-16 02:54:02.751803] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.783 [2024-12-16 02:54:02.751808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99408 len:8 PRP1 0x0 PRP2 0x0 00:32:46.783 [2024-12-16 02:54:02.751814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.783 [2024-12-16 02:54:02.751820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.783 [2024-12-16 02:54:02.751825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.783 [2024-12-16 02:54:02.751830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99416 len:8 PRP1 0x0 PRP2 0x0 00:32:46.783 [2024-12-16 02:54:02.751836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.783 [2024-12-16 02:54:02.751844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.783 [2024-12-16 02:54:02.751853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.783 [2024-12-16 02:54:02.751858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99424 len:8 PRP1 0x0 PRP2 0x0 00:32:46.783 [2024-12-16 02:54:02.751865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.783 [2024-12-16 02:54:02.751871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.783 [2024-12-16 02:54:02.751876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.783 [2024-12-16 02:54:02.751881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99432 len:8 PRP1 0x0 PRP2 0x0 00:32:46.783 [2024-12-16 02:54:02.751887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.783 [2024-12-16 02:54:02.751893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.783 [2024-12-16 02:54:02.751899] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.783 [2024-12-16 02:54:02.751904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99440 len:8 PRP1 0x0 PRP2 0x0 00:32:46.783 [2024-12-16 02:54:02.751910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.783 [2024-12-16 02:54:02.751916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.783 [2024-12-16 02:54:02.751921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.783 [2024-12-16 02:54:02.751927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99448 len:8 PRP1 0x0 PRP2 0x0 00:32:46.783 [2024-12-16 02:54:02.751933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.783 [2024-12-16 02:54:02.751941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.783 [2024-12-16 02:54:02.751946] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.783 [2024-12-16 02:54:02.751952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99456 len:8 PRP1 0x0 PRP2 0x0 00:32:46.783 [2024-12-16 02:54:02.751958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.783 [2024-12-16 02:54:02.751966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.783 [2024-12-16 02:54:02.751971] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.783 [2024-12-16 02:54:02.751977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99464 len:8 PRP1 0x0 PRP2 0x0 00:32:46.783 
[2024-12-16 02:54:02.751982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.783 [2024-12-16 02:54:02.751989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.783 [2024-12-16 02:54:02.751994] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.783 [2024-12-16 02:54:02.751999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99472 len:8 PRP1 0x0 PRP2 0x0 00:32:46.783 [2024-12-16 02:54:02.752005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.783 [2024-12-16 02:54:02.752012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.783 [2024-12-16 02:54:02.752017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.783 [2024-12-16 02:54:02.752022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99480 len:8 PRP1 0x0 PRP2 0x0 00:32:46.783 [2024-12-16 02:54:02.752042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.783 [2024-12-16 02:54:02.752049] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.783 [2024-12-16 02:54:02.752054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.783 [2024-12-16 02:54:02.752060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99488 len:8 PRP1 0x0 PRP2 0x0 00:32:46.783 [2024-12-16 02:54:02.752066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.783 [2024-12-16 02:54:02.752072] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:32:46.783 [2024-12-16 02:54:02.752078] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.783 [2024-12-16 02:54:02.752083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99496 len:8 PRP1 0x0 PRP2 0x0 00:32:46.783 [2024-12-16 02:54:02.752089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.783 [2024-12-16 02:54:02.752095] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.783 [2024-12-16 02:54:02.752100] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.783 [2024-12-16 02:54:02.752105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99504 len:8 PRP1 0x0 PRP2 0x0 00:32:46.783 [2024-12-16 02:54:02.752112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.783 [2024-12-16 02:54:02.752118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.783 [2024-12-16 02:54:02.752123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.784 [2024-12-16 02:54:02.752130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99512 len:8 PRP1 0x0 PRP2 0x0 00:32:46.784 [2024-12-16 02:54:02.752136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.784 [2024-12-16 02:54:02.752144] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.784 [2024-12-16 02:54:02.752148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.784 [2024-12-16 02:54:02.752153] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99520 len:8 PRP1 0x0 PRP2 0x0 00:32:46.784 [2024-12-16 02:54:02.752162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.784 [2024-12-16 02:54:02.752168] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.784 [2024-12-16 02:54:02.752173] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.784 [2024-12-16 02:54:02.752178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99528 len:8 PRP1 0x0 PRP2 0x0 00:32:46.784 [2024-12-16 02:54:02.752184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.784 [2024-12-16 02:54:02.752190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.784 [2024-12-16 02:54:02.752195] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.784 [2024-12-16 02:54:02.752200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99536 len:8 PRP1 0x0 PRP2 0x0 00:32:46.784 [2024-12-16 02:54:02.752206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.784 [2024-12-16 02:54:02.752212] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.784 [2024-12-16 02:54:02.752218] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.784 [2024-12-16 02:54:02.752223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99544 len:8 PRP1 0x0 PRP2 0x0 00:32:46.784 [2024-12-16 02:54:02.752229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:46.784 [2024-12-16 02:54:02.752235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.784 [2024-12-16 02:54:02.752240] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.784 [2024-12-16 02:54:02.752245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99552 len:8 PRP1 0x0 PRP2 0x0 00:32:46.784 [2024-12-16 02:54:02.752251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.784 [2024-12-16 02:54:02.767345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.784 [2024-12-16 02:54:02.767357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.784 [2024-12-16 02:54:02.767366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99560 len:8 PRP1 0x0 PRP2 0x0 00:32:46.784 [2024-12-16 02:54:02.767376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.784 [2024-12-16 02:54:02.767385] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.784 [2024-12-16 02:54:02.767392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.784 [2024-12-16 02:54:02.767399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99568 len:8 PRP1 0x0 PRP2 0x0 00:32:46.784 [2024-12-16 02:54:02.767408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.784 [2024-12-16 02:54:02.767417] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.784 [2024-12-16 02:54:02.767424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:32:46.784 [2024-12-16 02:54:02.767434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99576 len:8 PRP1 0x0 PRP2 0x0 00:32:46.784 [2024-12-16 02:54:02.767443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.784 [2024-12-16 02:54:02.767453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.784 [2024-12-16 02:54:02.767463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.784 [2024-12-16 02:54:02.767471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99584 len:8 PRP1 0x0 PRP2 0x0 00:32:46.784 [2024-12-16 02:54:02.767481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.784 [2024-12-16 02:54:02.767489] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.784 [2024-12-16 02:54:02.767496] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.784 [2024-12-16 02:54:02.767504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99592 len:8 PRP1 0x0 PRP2 0x0 00:32:46.784 [2024-12-16 02:54:02.767513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.784 [2024-12-16 02:54:02.767523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.784 [2024-12-16 02:54:02.767529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.784 [2024-12-16 02:54:02.767537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99600 len:8 PRP1 0x0 PRP2 0x0 00:32:46.784 [2024-12-16 02:54:02.767545] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.784 [2024-12-16 02:54:02.767554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.784 [2024-12-16 02:54:02.767561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.784 [2024-12-16 02:54:02.767570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99608 len:8 PRP1 0x0 PRP2 0x0 00:32:46.784 [2024-12-16 02:54:02.767578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.784 [2024-12-16 02:54:02.767587] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.784 [2024-12-16 02:54:02.767593] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.784 [2024-12-16 02:54:02.767600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99616 len:8 PRP1 0x0 PRP2 0x0 00:32:46.784 [2024-12-16 02:54:02.767609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.784 [2024-12-16 02:54:02.767619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.784 [2024-12-16 02:54:02.767625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.784 [2024-12-16 02:54:02.767632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99624 len:8 PRP1 0x0 PRP2 0x0 00:32:46.784 [2024-12-16 02:54:02.767641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.784 [2024-12-16 02:54:02.767650] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.784 
[2024-12-16 02:54:02.767657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.784 [2024-12-16 02:54:02.767666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99632 len:8 PRP1 0x0 PRP2 0x0 00:32:46.784 [2024-12-16 02:54:02.767676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.784 [2024-12-16 02:54:02.767723] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:46.784 [2024-12-16 02:54:02.767753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:46.784 [2024-12-16 02:54:02.767764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.784 [2024-12-16 02:54:02.767777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:46.784 [2024-12-16 02:54:02.767788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.784 [2024-12-16 02:54:02.767797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:46.784 [2024-12-16 02:54:02.767807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.784 [2024-12-16 02:54:02.767817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:46.784 [2024-12-16 02:54:02.767825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:46.784 [2024-12-16 02:54:02.767834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:32:46.784 [2024-12-16 02:54:02.767875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2e460 (9): Bad file descriptor 00:32:46.784 [2024-12-16 02:54:02.771630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:46.784 [2024-12-16 02:54:02.882788] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:32:46.784 10527.50 IOPS, 41.12 MiB/s [2024-12-16T01:54:17.443Z] 10844.67 IOPS, 42.36 MiB/s [2024-12-16T01:54:17.443Z] 10979.50 IOPS, 42.89 MiB/s [2024-12-16T01:54:17.443Z] [2024-12-16 02:54:06.285299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:56200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.784 [2024-12-16 02:54:06.285332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.784 [2024-12-16 02:54:06.285346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:56208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.784 [2024-12-16 02:54:06.285354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.784 [2024-12-16 02:54:06.285363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:56216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.784 [2024-12-16 02:54:06.285370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.784 [2024-12-16 02:54:06.285379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:56224 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.784 [2024-12-16 02:54:06.285385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.784 [2024-12-16 02:54:06.285393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:56232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.784 [2024-12-16 02:54:06.285401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.784 [2024-12-16 02:54:06.285410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:56240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.784 [2024-12-16 02:54:06.285417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.784 [2024-12-16 02:54:06.285425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:56248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.784 [2024-12-16 02:54:06.285432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.784 [2024-12-16 02:54:06.285440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:56256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.785 [2024-12-16 02:54:06.285446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.785 [2024-12-16 02:54:06.285458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:56264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.785 [2024-12-16 02:54:06.285465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.785 [2024-12-16 
02:54:06.285473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:56272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.785 [2024-12-16 02:54:06.285480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.785
[... identical ABORTED - SQ DELETION notices for READ commands lba:56280 through lba:56504 omitted ...]
[2024-12-16 02:54:06.285920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:56568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.786 [2024-12-16 02:54:06.285926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.786
[... identical ABORTED - SQ DELETION notices for WRITE commands lba:56576 through lba:56944 omitted ...]
[2024-12-16 02:54:06.286647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.787 [2024-12-16 02:54:06.286656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56952 len:8 PRP1 0x0 PRP2 0x0 00:32:46.787 [2024-12-16 02:54:06.286663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.787 [2024-12-16 02:54:06.286673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.787
[... identical "aborting queued i/o" / "Command completed manually" cycles for WRITE commands lba:56960 through lba:57136 omitted ...]
[2024-12-16 02:54:06.287252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.788 [2024-12-16 02:54:06.287257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57144 len:8 PRP1 0x0 PRP2 0x0 00:32:46.788 [2024-12-16 02:54:06.287264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0
sqhd:0000 p:0 m:0 dnr:0 00:32:46.788 [2024-12-16 02:54:06.287270] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.788 [2024-12-16 02:54:06.287275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.788 [2024-12-16 02:54:06.287280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57152 len:8 PRP1 0x0 PRP2 0x0 00:32:46.788 [2024-12-16 02:54:06.287286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.788 [2024-12-16 02:54:06.287292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.788 [2024-12-16 02:54:06.287296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.788 [2024-12-16 02:54:06.287303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57160 len:8 PRP1 0x0 PRP2 0x0 00:32:46.788 [2024-12-16 02:54:06.287310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.788 [2024-12-16 02:54:06.287316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.788 [2024-12-16 02:54:06.287321] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.788 [2024-12-16 02:54:06.287326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57168 len:8 PRP1 0x0 PRP2 0x0 00:32:46.788 [2024-12-16 02:54:06.287332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.788 [2024-12-16 02:54:06.287338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.788 [2024-12-16 02:54:06.287344] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:32:46.788 [2024-12-16 02:54:06.297537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57176 len:8 PRP1 0x0 PRP2 0x0 00:32:46.788 [2024-12-16 02:54:06.297548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.788 [2024-12-16 02:54:06.297558] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.788 [2024-12-16 02:54:06.297563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.788 [2024-12-16 02:54:06.297569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57184 len:8 PRP1 0x0 PRP2 0x0 00:32:46.788 [2024-12-16 02:54:06.297576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.788 [2024-12-16 02:54:06.297582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.788 [2024-12-16 02:54:06.297587] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.788 [2024-12-16 02:54:06.297592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57192 len:8 PRP1 0x0 PRP2 0x0 00:32:46.788 [2024-12-16 02:54:06.297598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.788 [2024-12-16 02:54:06.297605] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.788 [2024-12-16 02:54:06.297611] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.788 [2024-12-16 02:54:06.297616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57200 len:8 PRP1 0x0 PRP2 0x0 00:32:46.788 [2024-12-16 02:54:06.297622] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.788 [2024-12-16 02:54:06.297628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.788 [2024-12-16 02:54:06.297633] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.788 [2024-12-16 02:54:06.297638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57208 len:8 PRP1 0x0 PRP2 0x0 00:32:46.788 [2024-12-16 02:54:06.297644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.788 [2024-12-16 02:54:06.297652] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.788 [2024-12-16 02:54:06.297657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.788 [2024-12-16 02:54:06.297662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57216 len:8 PRP1 0x0 PRP2 0x0 00:32:46.788 [2024-12-16 02:54:06.297668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.788 [2024-12-16 02:54:06.297674] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.788 [2024-12-16 02:54:06.297679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.788 [2024-12-16 02:54:06.297686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56512 len:8 PRP1 0x0 PRP2 0x0 00:32:46.788 [2024-12-16 02:54:06.297693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.788 [2024-12-16 02:54:06.297699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.788 
[2024-12-16 02:54:06.297704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.788 [2024-12-16 02:54:06.297710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56520 len:8 PRP1 0x0 PRP2 0x0 00:32:46.788 [2024-12-16 02:54:06.297715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.788 [2024-12-16 02:54:06.297722] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.788 [2024-12-16 02:54:06.297729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.788 [2024-12-16 02:54:06.297735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56528 len:8 PRP1 0x0 PRP2 0x0 00:32:46.788 [2024-12-16 02:54:06.297743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.788 [2024-12-16 02:54:06.297750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.788 [2024-12-16 02:54:06.297754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.788 [2024-12-16 02:54:06.297760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56536 len:8 PRP1 0x0 PRP2 0x0 00:32:46.788 [2024-12-16 02:54:06.297766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.788 [2024-12-16 02:54:06.297773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.788 [2024-12-16 02:54:06.297778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.788 [2024-12-16 02:54:06.297783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:56544 len:8 PRP1 0x0 PRP2 0x0 00:32:46.788 [2024-12-16 02:54:06.297790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.788 [2024-12-16 02:54:06.297796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.788 [2024-12-16 02:54:06.297801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.788 [2024-12-16 02:54:06.297807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56552 len:8 PRP1 0x0 PRP2 0x0 00:32:46.788 [2024-12-16 02:54:06.297814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.788 [2024-12-16 02:54:06.297820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:46.788 [2024-12-16 02:54:06.297825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:46.788 [2024-12-16 02:54:06.297831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56560 len:8 PRP1 0x0 PRP2 0x0 00:32:46.788 [2024-12-16 02:54:06.297837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.788 [2024-12-16 02:54:06.297882] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:32:46.789 [2024-12-16 02:54:06.297904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:46.789 [2024-12-16 02:54:06.297912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.789 [2024-12-16 02:54:06.297920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:46.789 [2024-12-16 02:54:06.297927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.789 [2024-12-16 02:54:06.297933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:46.789 [2024-12-16 02:54:06.297940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.789 [2024-12-16 02:54:06.297948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:46.789 [2024-12-16 02:54:06.297955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.789 [2024-12-16 02:54:06.297961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:32:46.789 [2024-12-16 02:54:06.297983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2e460 (9): Bad file descriptor 00:32:46.789 [2024-12-16 02:54:06.301691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:32:46.789 [2024-12-16 02:54:06.451808] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:32:46.789 10671.00 IOPS, 41.68 MiB/s [2024-12-16T01:54:17.448Z] 10809.00 IOPS, 42.22 MiB/s [2024-12-16T01:54:17.448Z] 10923.00 IOPS, 42.67 MiB/s [2024-12-16T01:54:17.448Z] 10966.25 IOPS, 42.84 MiB/s [2024-12-16T01:54:17.448Z] 11025.11 IOPS, 43.07 MiB/s [2024-12-16T01:54:17.448Z] [2024-12-16 02:54:10.715639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:111608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.789 [2024-12-16 02:54:10.715673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.789 [2024-12-16 02:54:10.715687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:111680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.789 [2024-12-16 02:54:10.715696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.789 [2024-12-16 02:54:10.715705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:111688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.789 [2024-12-16 02:54:10.715714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.789 [2024-12-16 02:54:10.715723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:111696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.789 [2024-12-16 02:54:10.715731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.789 [2024-12-16 02:54:10.715741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:111704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.789 [2024-12-16 02:54:10.715749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.789 [2024-12-16 02:54:10.715757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:111712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.789 [2024-12-16 02:54:10.715764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.789 [2024-12-16 02:54:10.715772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:111720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.789 [2024-12-16 02:54:10.715779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.789 [2024-12-16 02:54:10.715788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:111728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.789 [2024-12-16 02:54:10.715796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.789 [2024-12-16 02:54:10.715805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:111736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.789 [2024-12-16 02:54:10.715812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.789 [2024-12-16 02:54:10.715820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:111744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.789 [2024-12-16 02:54:10.715828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.789 [2024-12-16 02:54:10.715836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:111752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.789 [2024-12-16 
02:54:10.715845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.789 [2024-12-16 02:54:10.715868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:111760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.789 [2024-12-16 02:54:10.715881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.789 [2024-12-16 02:54:10.715890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:111768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.789 [2024-12-16 02:54:10.715897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.789 [2024-12-16 02:54:10.715905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:111776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.789 [2024-12-16 02:54:10.715912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.789 [2024-12-16 02:54:10.715920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:111784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.789 [2024-12-16 02:54:10.715927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.789 [2024-12-16 02:54:10.715936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:111792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.789 [2024-12-16 02:54:10.715943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.789 [2024-12-16 02:54:10.715952] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:111800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.789 [2024-12-16 02:54:10.715958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.789 [2024-12-16 02:54:10.715967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:111808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.789 [2024-12-16 02:54:10.715974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.789 [2024-12-16 02:54:10.715982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:111616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:46.789 [2024-12-16 02:54:10.715988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.789 [2024-12-16 02:54:10.715997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:111816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.789 [2024-12-16 02:54:10.716003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.789 [2024-12-16 02:54:10.716011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:111824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.789 [2024-12-16 02:54:10.716018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.789 [2024-12-16 02:54:10.716026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:111832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.789 [2024-12-16 02:54:10.716033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:46.789 [2024-12-16 02:54:10.716041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:111840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.789 [2024-12-16 02:54:10.716048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.789 [2024-12-16 02:54:10.716055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:111848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.790 [2024-12-16 02:54:10.716062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.790 [2024-12-16 02:54:10.716071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:111856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.790 [2024-12-16 02:54:10.716077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.790 [2024-12-16 02:54:10.716085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:111864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.790 [2024-12-16 02:54:10.716092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.790 [2024-12-16 02:54:10.716100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:111872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.790 [2024-12-16 02:54:10.716107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.790 [2024-12-16 02:54:10.716114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:111880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.790 [2024-12-16 
02:54:10.716121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.790 [2024-12-16 02:54:10.716129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:111888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.790 [2024-12-16 02:54:10.716135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.790 [2024-12-16 02:54:10.716144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:111896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.790 [2024-12-16 02:54:10.716151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.790 [2024-12-16 02:54:10.716159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:111904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.790 [2024-12-16 02:54:10.716165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.790 [2024-12-16 02:54:10.716173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:111912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.790 [2024-12-16 02:54:10.716179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.790 [2024-12-16 02:54:10.716187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:111920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.790 [2024-12-16 02:54:10.716194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.790 [2024-12-16 02:54:10.716202] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:111928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.790 [2024-12-16 02:54:10.716209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.790 [2024-12-16 02:54:10.716218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:111936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.790 [2024-12-16 02:54:10.716225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.790 [2024-12-16 02:54:10.716233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:111944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.790 [2024-12-16 02:54:10.716239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.790 [2024-12-16 02:54:10.716247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:111952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.790 [2024-12-16 02:54:10.716255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.790 [2024-12-16 02:54:10.716264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:111960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.790 [2024-12-16 02:54:10.716270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.790 [2024-12-16 02:54:10.716278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:111968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.790 [2024-12-16 02:54:10.716284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:46.790 [2024-12-16 02:54:10.716292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:111976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.790 [2024-12-16 02:54:10.716299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.790 [2024-12-16 02:54:10.716307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:111984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.790 [2024-12-16 02:54:10.716314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.790 [2024-12-16 02:54:10.716322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:111992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.790 [2024-12-16 02:54:10.716328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.790 [2024-12-16 02:54:10.716336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:112000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.790 [2024-12-16 02:54:10.716343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.790 [2024-12-16 02:54:10.716351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:112008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.790 [2024-12-16 02:54:10.716357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:46.790 [2024-12-16 02:54:10.716365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:112016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:46.790 [2024-12-16 
02:54:10.716371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:46.790 [2024-12-16 02:54:10.716380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:112024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:46.790 [2024-12-16 02:54:10.716386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE command / ABORTED - SQ DELETION (00/08) record pairs elided for lba:112032 through lba:112392 in steps of 8 (cid varies per command) ...]
00:32:46.791 [2024-12-16 02:54:10.717098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:46.791 [2024-12-16 02:54:10.717106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112400 len:8 PRP1 0x0 PRP2 0x0
00:32:46.791 [2024-12-16 02:54:10.717112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:46.791 [2024-12-16 02:54:10.717121] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... identical manual-completion record groups (Command completed manually / WRITE ... / ABORTED - SQ DELETION (00/08) / aborting queued i/o) elided for queued WRITEs lba:112408 through lba:112624 in steps of 8 ...]
00:32:46.793 [2024-12-16 02:54:10.717792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111624 len:8 PRP1 0x0 PRP2 0x0
00:32:46.793 [2024-12-16 02:54:10.717798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical manual-completion record groups elided for queued READs lba:111632 through lba:111672 in steps of 8 ...]
00:32:46.793 [2024-12-16 02:54:10.728263] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:32:46.793 [2024-12-16 02:54:10.728291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:32:46.793 [2024-12-16 02:54:10.728301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical ASYNC EVENT REQUEST / ABORTED - SQ DELETION (00/08) record pairs elided for cid:1 through cid:3 ...]
00:32:46.793 [2024-12-16 02:54:10.728367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:32:46.793 [2024-12-16 02:54:10.728392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc2e460 (9): Bad file descriptor
00:32:46.793 [2024-12-16 02:54:10.732119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:32:46.793 [2024-12-16 02:54:10.874614] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:32:46.793 10901.10 IOPS, 42.58 MiB/s [2024-12-16T01:54:17.452Z] 10942.36 IOPS, 42.74 MiB/s [2024-12-16T01:54:17.452Z] 10975.17 IOPS, 42.87 MiB/s [2024-12-16T01:54:17.452Z] 11003.85 IOPS, 42.98 MiB/s [2024-12-16T01:54:17.452Z] 11039.57 IOPS, 43.12 MiB/s [2024-12-16T01:54:17.452Z] 11061.53 IOPS, 43.21 MiB/s 00:32:46.793 Latency(us) 00:32:46.793 [2024-12-16T01:54:17.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:46.793 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:46.793 Verification LBA range: start 0x0 length 0x4000 00:32:46.793 NVMe0n1 : 15.01 11065.03 43.22 1294.68 0.00 10334.60 415.45 26089.57 00:32:46.793 [2024-12-16T01:54:17.452Z] =================================================================================================================== 00:32:46.793 [2024-12-16T01:54:17.452Z] Total : 11065.03 43.22 1294.68 0.00 10334.60 415.45 26089.57 00:32:46.793 Received shutdown signal, test time was about 15.000000 seconds 00:32:46.793 00:32:46.793 Latency(us) 00:32:46.793 [2024-12-16T01:54:17.452Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:46.793 [2024-12-16T01:54:17.452Z] =================================================================================================================== 00:32:46.793 [2024-12-16T01:54:17.452Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:46.793 02:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:32:46.793 02:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:32:46.793 02:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:32:46.793 02:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1150958 00:32:46.793 02:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:32:46.793 02:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1150958 /var/tmp/bdevperf.sock 00:32:46.793 02:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1150958 ']' 00:32:46.793 02:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:46.793 02:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:46.793 02:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:46.793 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:46.793 02:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:46.793 02:54:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:46.793 02:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:46.793 02:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:46.793 02:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:46.793 [2024-12-16 02:54:17.345104] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:46.793 02:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:47.051 [2024-12-16 02:54:17.545685] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:47.051 
02:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:47.308 NVMe0n1 00:32:47.308 02:54:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:47.566 00:32:47.566 02:54:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:47.823 00:32:47.823 02:54:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:47.823 02:54:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:48.081 02:54:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:48.338 02:54:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:51.618 02:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:51.618 02:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:51.618 02:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:51.618 02:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1151798 00:32:51.618 02:54:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1151798 00:32:52.550 { 00:32:52.550 "results": [ 00:32:52.550 { 00:32:52.550 "job": "NVMe0n1", 00:32:52.550 "core_mask": "0x1", 00:32:52.550 "workload": "verify", 00:32:52.550 "status": "finished", 00:32:52.550 "verify_range": { 00:32:52.550 "start": 0, 00:32:52.550 "length": 16384 00:32:52.551 }, 00:32:52.551 "queue_depth": 128, 00:32:52.551 "io_size": 4096, 00:32:52.551 "runtime": 1.014815, 00:32:52.551 "iops": 11339.012529377276, 00:32:52.551 "mibps": 44.293017692879985, 00:32:52.551 "io_failed": 0, 00:32:52.551 "io_timeout": 0, 00:32:52.551 "avg_latency_us": 11245.448016983453, 00:32:52.551 "min_latency_us": 2324.967619047619, 00:32:52.551 "max_latency_us": 9175.04 00:32:52.551 } 00:32:52.551 ], 00:32:52.551 "core_count": 1 00:32:52.551 } 00:32:52.551 02:54:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:52.551 [2024-12-16 02:54:16.982360] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:32:52.551 [2024-12-16 02:54:16.982419] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1150958 ] 00:32:52.551 [2024-12-16 02:54:17.056516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:52.551 [2024-12-16 02:54:17.076411] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:52.551 [2024-12-16 02:54:18.761965] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:52.551 [2024-12-16 02:54:18.762010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.551 [2024-12-16 02:54:18.762021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.551 [2024-12-16 02:54:18.762030] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.551 [2024-12-16 02:54:18.762037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.551 [2024-12-16 02:54:18.762045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.551 [2024-12-16 02:54:18.762052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.551 [2024-12-16 02:54:18.762059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:52.551 [2024-12-16 02:54:18.762066] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:52.551 [2024-12-16 02:54:18.762073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:32:52.551 [2024-12-16 02:54:18.762098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:32:52.551 [2024-12-16 02:54:18.762112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f3a460 (9): Bad file descriptor 00:32:52.551 [2024-12-16 02:54:18.853026] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:32:52.551 Running I/O for 1 seconds... 00:32:52.551 11259.00 IOPS, 43.98 MiB/s 00:32:52.551 Latency(us) 00:32:52.551 [2024-12-16T01:54:23.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:52.551 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:52.551 Verification LBA range: start 0x0 length 0x4000 00:32:52.551 NVMe0n1 : 1.01 11339.01 44.29 0.00 0.00 11245.45 2324.97 9175.04 00:32:52.551 [2024-12-16T01:54:23.210Z] =================================================================================================================== 00:32:52.551 [2024-12-16T01:54:23.210Z] Total : 11339.01 44.29 0.00 0.00 11245.45 2324.97 9175.04 00:32:52.551 02:54:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:52.551 02:54:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:52.808 02:54:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:53.065 02:54:23 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:53.065 02:54:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:53.066 02:54:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:53.323 02:54:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:56.600 02:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:56.600 02:54:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:56.600 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1150958 00:32:56.600 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1150958 ']' 00:32:56.600 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1150958 00:32:56.600 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:56.600 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:56.600 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1150958 00:32:56.600 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:56.600 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:56.600 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1150958' 00:32:56.600 killing 
process with pid 1150958 00:32:56.600 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1150958 00:32:56.600 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1150958 00:32:56.858 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:56.858 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:57.116 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:57.116 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:57.116 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:57.116 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:57.116 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:32:57.116 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:57.116 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:32:57.116 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:57.116 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:57.116 rmmod nvme_tcp 00:32:57.116 rmmod nvme_fabrics 00:32:57.116 rmmod nvme_keyring 00:32:57.116 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:57.116 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:32:57.116 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:32:57.116 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1147676 ']' 00:32:57.116 02:54:27 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1147676 00:32:57.116 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1147676 ']' 00:32:57.116 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1147676 00:32:57.116 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:57.116 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:57.116 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1147676 00:32:57.116 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:57.116 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:57.116 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1147676' 00:32:57.116 killing process with pid 1147676 00:32:57.116 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1147676 00:32:57.116 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1147676 00:32:57.375 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:57.375 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:57.375 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:57.375 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:32:57.375 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:32:57.375 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:57.375 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:32:57.375 02:54:27 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:57.375 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:57.375 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:57.375 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:57.375 02:54:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.390 02:54:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:59.390 00:32:59.390 real 0m37.177s 00:32:59.390 user 1m57.544s 00:32:59.390 sys 0m7.904s 00:32:59.390 02:54:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:59.390 02:54:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:59.390 ************************************ 00:32:59.390 END TEST nvmf_failover 00:32:59.390 ************************************ 00:32:59.390 02:54:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:59.390 02:54:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:59.390 02:54:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:59.390 02:54:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.390 ************************************ 00:32:59.390 START TEST nvmf_host_discovery 00:32:59.390 ************************************ 00:32:59.390 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:59.683 * Looking for test storage... 
00:32:59.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:59.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.683 --rc genhtml_branch_coverage=1 00:32:59.683 --rc genhtml_function_coverage=1 00:32:59.683 --rc 
genhtml_legend=1 00:32:59.683 --rc geninfo_all_blocks=1 00:32:59.683 --rc geninfo_unexecuted_blocks=1 00:32:59.683 00:32:59.683 ' 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:59.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.683 --rc genhtml_branch_coverage=1 00:32:59.683 --rc genhtml_function_coverage=1 00:32:59.683 --rc genhtml_legend=1 00:32:59.683 --rc geninfo_all_blocks=1 00:32:59.683 --rc geninfo_unexecuted_blocks=1 00:32:59.683 00:32:59.683 ' 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:59.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.683 --rc genhtml_branch_coverage=1 00:32:59.683 --rc genhtml_function_coverage=1 00:32:59.683 --rc genhtml_legend=1 00:32:59.683 --rc geninfo_all_blocks=1 00:32:59.683 --rc geninfo_unexecuted_blocks=1 00:32:59.683 00:32:59.683 ' 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:59.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.683 --rc genhtml_branch_coverage=1 00:32:59.683 --rc genhtml_function_coverage=1 00:32:59.683 --rc genhtml_legend=1 00:32:59.683 --rc geninfo_all_blocks=1 00:32:59.683 --rc geninfo_unexecuted_blocks=1 00:32:59.683 00:32:59.683 ' 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:59.683 02:54:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:59.683 02:54:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.683 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.684 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.684 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:59.684 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.684 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:32:59.684 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:59.684 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:59.684 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:59.684 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:59.684 02:54:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:59.684 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:59.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:59.684 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:59.684 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:59.684 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:59.684 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:59.684 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:59.684 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:59.684 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:59.684 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:59.684 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:59.684 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:59.684 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:59.684 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:59.684 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:59.684 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:59.684 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:32:59.684 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:59.684 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:59.684 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.684 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:59.684 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:59.684 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:32:59.684 02:54:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:33:06.253 
02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:06.253 02:54:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:06.253 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:06.253 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:06.253 Found net devices under 0000:af:00.0: cvl_0_0 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:06.253 Found net devices under 0000:af:00.1: cvl_0_1 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:06.253 02:54:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:06.253 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:06.253 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:06.253 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:06.253 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:06.253 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:06.253 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:06.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:06.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:33:06.253 00:33:06.253 --- 10.0.0.2 ping statistics --- 00:33:06.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.253 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:33:06.253 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:06.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:06.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:33:06.253 00:33:06.253 --- 10.0.0.1 ping statistics --- 00:33:06.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:06.253 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:33:06.253 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:06.253 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:33:06.253 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:06.254 
02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1156171 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1156171 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1156171 ']' 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:06.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.254 [2024-12-16 02:54:36.265724] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:33:06.254 [2024-12-16 02:54:36.265766] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:06.254 [2024-12-16 02:54:36.340087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.254 [2024-12-16 02:54:36.361046] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:06.254 [2024-12-16 02:54:36.361082] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:06.254 [2024-12-16 02:54:36.361089] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:06.254 [2024-12-16 02:54:36.361095] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:06.254 [2024-12-16 02:54:36.361100] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:06.254 [2024-12-16 02:54:36.361581] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.254 [2024-12-16 02:54:36.492057] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.254 [2024-12-16 02:54:36.504228] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:06.254 02:54:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.254 null0 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.254 null1 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1156191 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1156191 /tmp/host.sock 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 1156191 ']' 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:06.254 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.254 [2024-12-16 02:54:36.580681] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:33:06.254 [2024-12-16 02:54:36.580723] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1156191 ] 00:33:06.254 [2024-12-16 02:54:36.653263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:06.254 [2024-12-16 02:54:36.676091] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:33:06.254 
02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:33:06.254 02:54:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:06.254 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.255 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:33:06.255 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.255 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:06.255 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.513 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:33:06.513 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:33:06.513 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:06.513 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:06.513 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.513 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:06.513 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.513 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:06.513 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.513 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:33:06.513 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:33:06.513 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.513 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.513 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.513 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:33:06.513 
02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:06.513 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:06.513 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.513 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:06.513 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:06.513 02:54:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.513 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.513 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:33:06.513 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:33:06.513 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:06.513 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:06.513 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.513 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:06.513 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.513 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:06.514 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.514 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:33:06.514 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:33:06.514 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.514 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.514 [2024-12-16 02:54:37.093726] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:06.514 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.514 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:33:06.514 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:06.514 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:06.514 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.514 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:06.514 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.514 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:06.514 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.514 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:33:06.514 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:33:06.514 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:06.514 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:06.514 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.514 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:33:06.514 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.514 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:06.514 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.772 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:33:06.772 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:33:06.772 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:06.772 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:06.772 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:06.772 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:06.772 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:06.772 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:06.772 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:06.772 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:06.772 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.772 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.772 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 
-- # jq '. | length' 00:33:06.772 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.772 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:06.772 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:33:06.772 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:06.772 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:06.772 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:33:06.772 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.772 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.772 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.772 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:06.772 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:06.772 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:06.773 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:06.773 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:06.773 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:06.773 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:06.773 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.773 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:06.773 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:06.773 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:06.773 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:06.773 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.773 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:33:06.773 02:54:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:33:07.339 [2024-12-16 02:54:37.830347] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:07.339 [2024-12-16 02:54:37.830374] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:07.339 [2024-12-16 02:54:37.830388] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:07.339 [2024-12-16 02:54:37.956758] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:07.597 [2024-12-16 02:54:38.051376] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:33:07.597 [2024-12-16 02:54:38.052121] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x122cc60:1 started. 
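The repeated `local max=10` / `(( max-- ))` / `eval` / `sleep 1` lines in the trace come from autotest_common.sh's `waitforcondition`: it re-evaluates the condition string up to ten times, sleeping between attempts, and returns 0 as soon as the condition holds — which is how the test tolerates the asynchronous discovery attach seen in the `bdev_nvme.c` INFO messages. A minimal sketch of that polling pattern (the `_sketch` name and the shortened sleep interval are ours; the real helper sleeps a full second):

```shell
#!/usr/bin/env bash
# Polling helper modeled on autotest_common.sh's waitforcondition:
# re-eval the condition string until it passes or attempts run out.
waitforcondition_sketch() {
    local cond=$1
    local max=10
    while (( max-- )); do
        if eval "$cond"; then
            return 0      # condition met, e.g. controller attached
        fi
        sleep 0.1         # the real helper sleeps 1s between polls
    done
    return 1              # gave up after max attempts
}

# Example: poll for a flag that a background step flips later,
# the way discovery attaches a controller asynchronously.
flag_file=$(mktemp)
( sleep 0.2; echo ready > "$flag_file" ) &
if waitforcondition_sketch "grep -q ready '$flag_file' 2>/dev/null"; then
    echo "condition met"
fi
wait
rm -f "$flag_file"
```

Passing the condition as a single quoted string (as in `waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'`) matters: the `$(...)` must be re-expanded by `eval` on every iteration, not once at the call site.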
00:33:07.598 [2024-12-16 02:54:38.053461] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:07.598 [2024-12-16 02:54:38.053476] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:07.598 [2024-12-16 02:54:38.058897] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x122cc60 was disconnected and freed. delete nvme_qpair. 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # 
waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" 
== "$NVMF_PORT" ]]' 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:07.856 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.857 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:33:07.857 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:07.857 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:33:07.857 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:07.857 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:07.857 02:54:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:07.857 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:07.857 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:07.857 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:07.857 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:07.857 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:07.857 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.857 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:07.857 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.857 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.857 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:07.857 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:33:07.857 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:07.857 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:07.857 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:33:07.857 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.857 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:07.857 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.857 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:07.857 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:08.116 
02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:08.116 [2024-12-16 02:54:38.596439] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x122cfe0:1 started. 00:33:08.116 [2024-12-16 02:54:38.600111] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x122cfe0 was disconnected and freed. delete nvme_qpair. 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:08.116 02:54:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:08.116 [2024-12-16 02:54:38.682024] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:08.116 [2024-12-16 02:54:38.682721] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:08.116 [2024-12-16 02:54:38.682740] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ 
\n\v\m\e\0\n\2 ]] 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:08.116 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:08.375 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:08.375 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:08.375 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:08.375 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:08.375 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:08.375 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:08.375 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:08.375 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.375 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:08.375 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:08.375 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:08.375 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.375 [2024-12-16 02:54:38.811447] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for 
nvme0 00:33:08.375 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:33:08.375 02:54:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:33:08.375 [2024-12-16 02:54:38.998362] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:33:08.375 [2024-12-16 02:54:38.998396] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:08.375 [2024-12-16 02:54:38.998404] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:08.375 [2024-12-16 02:54:38.998409] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:09.311 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:09.311 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:09.311 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:09.311 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@63 -- # xargs 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.312 [2024-12-16 02:54:39.929900] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:09.312 [2024-12-16 02:54:39.929923] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:09.312 [2024-12-16 02:54:39.930805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:09.312 [2024-12-16 02:54:39.930821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:09.312 [2024-12-16 02:54:39.930830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:33:09.312 [2024-12-16 02:54:39.930838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:09.312 [2024-12-16 02:54:39.930851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:09.312 [2024-12-16 02:54:39.930862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:09.312 [2024-12-16 02:54:39.930870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:09.312 [2024-12-16 02:54:39.930877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:09.312 [2024-12-16 02:54:39.930884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fed70 is same with the state(6) to be set 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:09.312 02:54:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:09.312 [2024-12-16 02:54:39.940815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11fed70 (9): Bad file descriptor 00:33:09.312 [2024-12-16 02:54:39.950856] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:09.312 [2024-12-16 02:54:39.950870] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:09.312 [2024-12-16 02:54:39.950876] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:09.312 [2024-12-16 02:54:39.950881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:09.312 [2024-12-16 02:54:39.950899] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:09.312 [2024-12-16 02:54:39.951100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.312 [2024-12-16 02:54:39.951116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fed70 with addr=10.0.0.2, port=4420 00:33:09.312 [2024-12-16 02:54:39.951124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fed70 is same with the state(6) to be set 00:33:09.312 [2024-12-16 02:54:39.951136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11fed70 (9): Bad file descriptor 00:33:09.312 [2024-12-16 02:54:39.951153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:09.312 [2024-12-16 02:54:39.951160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:09.312 [2024-12-16 02:54:39.951168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:09.312 [2024-12-16 02:54:39.951174] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:09.312 [2024-12-16 02:54:39.951182] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:09.312 [2024-12-16 02:54:39.951186] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:09.312 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.312 [2024-12-16 02:54:39.960929] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:09.312 [2024-12-16 02:54:39.960939] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:33:09.312 [2024-12-16 02:54:39.960943] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:09.312 [2024-12-16 02:54:39.960947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:09.312 [2024-12-16 02:54:39.960961] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:09.312 [2024-12-16 02:54:39.961125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.312 [2024-12-16 02:54:39.961140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fed70 with addr=10.0.0.2, port=4420 00:33:09.312 [2024-12-16 02:54:39.961147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fed70 is same with the state(6) to be set 00:33:09.312 [2024-12-16 02:54:39.961163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11fed70 (9): Bad file descriptor 00:33:09.312 [2024-12-16 02:54:39.961174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:09.312 [2024-12-16 02:54:39.961180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:09.312 [2024-12-16 02:54:39.961187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:09.312 [2024-12-16 02:54:39.961193] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:09.312 [2024-12-16 02:54:39.961197] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:09.312 [2024-12-16 02:54:39.961201] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:33:09.572 [2024-12-16 02:54:39.970992] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:09.572 [2024-12-16 02:54:39.971004] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:09.572 [2024-12-16 02:54:39.971009] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:09.572 [2024-12-16 02:54:39.971013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:09.572 [2024-12-16 02:54:39.971027] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:09.572 [2024-12-16 02:54:39.971207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.572 [2024-12-16 02:54:39.971219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fed70 with addr=10.0.0.2, port=4420 00:33:09.572 [2024-12-16 02:54:39.971227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fed70 is same with the state(6) to be set 00:33:09.572 [2024-12-16 02:54:39.971238] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11fed70 (9): Bad file descriptor 00:33:09.572 [2024-12-16 02:54:39.971253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:09.572 [2024-12-16 02:54:39.971260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:09.572 [2024-12-16 02:54:39.971267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:09.572 [2024-12-16 02:54:39.971276] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:33:09.572 [2024-12-16 02:54:39.971280] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:09.572 [2024-12-16 02:54:39.971284] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:09.572 [2024-12-16 02:54:39.981057] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:09.572 [2024-12-16 02:54:39.981071] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:09.572 [2024-12-16 02:54:39.981075] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:09.572 [2024-12-16 02:54:39.981078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:09.572 [2024-12-16 02:54:39.981092] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:09.572 [2024-12-16 02:54:39.981270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.572 [2024-12-16 02:54:39.981283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fed70 with addr=10.0.0.2, port=4420 00:33:09.572 [2024-12-16 02:54:39.981291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fed70 is same with the state(6) to be set 00:33:09.572 [2024-12-16 02:54:39.981302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11fed70 (9): Bad file descriptor 00:33:09.572 [2024-12-16 02:54:39.981318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:09.572 [2024-12-16 02:54:39.981326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:09.572 [2024-12-16 02:54:39.981332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:09.572 [2024-12-16 02:54:39.981338] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:09.572 [2024-12-16 02:54:39.981343] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:09.572 [2024-12-16 02:54:39.981347] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:33:09.572 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.572 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:09.572 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:09.572 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:09.572 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:09.572 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:09.572 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:09.572 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:09.572 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:09.573 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:09.573 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.573 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:09.573 [2024-12-16 02:54:39.991123] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:09.573 [2024-12-16 02:54:39.991137] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:09.573 [2024-12-16 02:54:39.991142] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:33:09.573 [2024-12-16 02:54:39.991146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:09.573 [2024-12-16 02:54:39.991161] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:09.573 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.573 [2024-12-16 02:54:39.991268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.573 [2024-12-16 02:54:39.991280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fed70 with addr=10.0.0.2, port=4420 00:33:09.573 [2024-12-16 02:54:39.991287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fed70 is same with the state(6) to be set 00:33:09.573 [2024-12-16 02:54:39.991298] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11fed70 (9): Bad file descriptor 00:33:09.573 [2024-12-16 02:54:39.991306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:09.573 [2024-12-16 02:54:39.991313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:09.573 [2024-12-16 02:54:39.991320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:09.573 [2024-12-16 02:54:39.991326] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:09.573 [2024-12-16 02:54:39.991332] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:09.573 [2024-12-16 02:54:39.991337] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:33:09.573 02:54:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:09.573 [2024-12-16 02:54:40.001192] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:09.573 [2024-12-16 02:54:40.001205] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:09.573 [2024-12-16 02:54:40.001209] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:09.573 [2024-12-16 02:54:40.001213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:09.573 [2024-12-16 02:54:40.001228] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:09.573 [2024-12-16 02:54:40.001382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.573 [2024-12-16 02:54:40.001396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fed70 with addr=10.0.0.2, port=4420 00:33:09.573 [2024-12-16 02:54:40.001404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fed70 is same with the state(6) to be set 00:33:09.573 [2024-12-16 02:54:40.001415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11fed70 (9): Bad file descriptor 00:33:09.573 [2024-12-16 02:54:40.001425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:09.573 [2024-12-16 02:54:40.001432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:09.573 [2024-12-16 02:54:40.001441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:33:09.573 [2024-12-16 02:54:40.001447] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:09.573 [2024-12-16 02:54:40.001452] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:09.573 [2024-12-16 02:54:40.001461] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:09.573 [2024-12-16 02:54:40.011259] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:09.573 [2024-12-16 02:54:40.011269] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:09.573 [2024-12-16 02:54:40.011273] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:09.573 [2024-12-16 02:54:40.011277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:09.573 [2024-12-16 02:54:40.011290] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:09.573 [2024-12-16 02:54:40.011496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:09.573 [2024-12-16 02:54:40.011509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11fed70 with addr=10.0.0.2, port=4420 00:33:09.573 [2024-12-16 02:54:40.011517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11fed70 is same with the state(6) to be set 00:33:09.573 [2024-12-16 02:54:40.011528] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11fed70 (9): Bad file descriptor 00:33:09.573 [2024-12-16 02:54:40.011538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:09.573 [2024-12-16 02:54:40.011544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:09.573 [2024-12-16 02:54:40.011552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:09.573 [2024-12-16 02:54:40.011558] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:09.573 [2024-12-16 02:54:40.011563] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:09.573 [2024-12-16 02:54:40.011566] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:33:09.573 [2024-12-16 02:54:40.016773] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:09.573 [2024-12-16 02:54:40.016795] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.573 
02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:33:09.573 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:33:09.574 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:09.574 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:09.574 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:33:09.574 02:54:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:09.574 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:09.574 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:09.574 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.574 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:09.574 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.574 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:09.574 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.574 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:09.574 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:09.574 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:33:09.574 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:33:09.574 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:09.574 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:09.574 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:33:09.574 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:09.574 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:09.574 
02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:09.574 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.574 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:09.574 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.574 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:09.574 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.832 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:09.832 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:09.832 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:33:09.832 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:33:09.832 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:09.832 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:09.832 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:09.832 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:09.832 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:09.832 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:09.832 02:54:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:09.832 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:09.832 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.832 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:09.832 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.832 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:33:09.832 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:33:09.832 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:09.832 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:09.832 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:09.832 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.832 02:54:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:10.767 [2024-12-16 02:54:41.308321] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:10.767 [2024-12-16 02:54:41.308338] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:10.767 [2024-12-16 02:54:41.308349] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:10.767 [2024-12-16 02:54:41.394603] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:33:11.335 [2024-12-16 02:54:41.699909] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:33:11.335 [2024-12-16 02:54:41.700494] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1238d60:1 started. 00:33:11.335 [2024-12-16 02:54:41.702097] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:11.335 [2024-12-16 02:54:41.702120] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:11.335 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.335 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:11.335 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:11.335 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:11.335 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:11.335 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:11.335 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:11.335 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:11.335 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:11.335 [2024-12-16 02:54:41.708266] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1238d60 was disconnected and freed. delete nvme_qpair. 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.336 request: 00:33:11.336 { 00:33:11.336 "name": "nvme", 00:33:11.336 "trtype": "tcp", 00:33:11.336 "traddr": "10.0.0.2", 00:33:11.336 "adrfam": "ipv4", 00:33:11.336 "trsvcid": "8009", 00:33:11.336 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:11.336 "wait_for_attach": true, 00:33:11.336 "method": "bdev_nvme_start_discovery", 00:33:11.336 "req_id": 1 00:33:11.336 } 00:33:11.336 Got JSON-RPC error response 00:33:11.336 response: 00:33:11.336 { 00:33:11.336 "code": -17, 00:33:11.336 "message": "File exists" 00:33:11.336 } 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:11.336 
02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:11.336 02:54:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.336 request: 00:33:11.336 { 00:33:11.336 "name": "nvme_second", 00:33:11.336 "trtype": "tcp", 00:33:11.336 "traddr": "10.0.0.2", 00:33:11.336 "adrfam": "ipv4", 00:33:11.336 "trsvcid": "8009", 00:33:11.336 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:11.336 "wait_for_attach": true, 00:33:11.336 "method": "bdev_nvme_start_discovery", 00:33:11.336 "req_id": 1 00:33:11.336 } 00:33:11.336 Got JSON-RPC error response 00:33:11.336 response: 00:33:11.336 { 00:33:11.336 "code": -17, 00:33:11.336 "message": "File exists" 00:33:11.336 } 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # xargs 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.336 02:54:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.712 [2024-12-16 02:54:42.941448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:12.712 [2024-12-16 02:54:42.941476] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1236660 with addr=10.0.0.2, port=8010 00:33:12.712 [2024-12-16 02:54:42.941488] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:12.712 [2024-12-16 02:54:42.941495] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:12.712 [2024-12-16 02:54:42.941501] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:13.649 [2024-12-16 02:54:43.943967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:13.649 [2024-12-16 02:54:43.943994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1236660 with addr=10.0.0.2, port=8010 00:33:13.649 [2024-12-16 02:54:43.944006] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:13.649 [2024-12-16 02:54:43.944012] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:13.649 [2024-12-16 02:54:43.944018] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:14.584 [2024-12-16 02:54:44.946130] bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:33:14.584 request: 00:33:14.584 { 00:33:14.584 "name": "nvme_second", 00:33:14.584 "trtype": "tcp", 00:33:14.584 "traddr": "10.0.0.2", 00:33:14.584 "adrfam": "ipv4", 00:33:14.584 "trsvcid": "8010", 00:33:14.584 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:14.584 "wait_for_attach": false, 00:33:14.584 "attach_timeout_ms": 3000, 00:33:14.584 "method": "bdev_nvme_start_discovery", 00:33:14.584 "req_id": 1 00:33:14.584 } 00:33:14.584 Got JSON-RPC error response 00:33:14.584 response: 00:33:14.584 { 00:33:14.584 "code": -110, 00:33:14.584 "message": "Connection timed out" 00:33:14.584 } 00:33:14.584 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # 
[[ 1 == 0 ]] 00:33:14.584 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:14.584 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:14.584 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:14.584 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:14.584 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:33:14.584 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:14.584 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:14.584 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.584 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:14.584 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.584 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:14.584 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.584 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:33:14.584 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:33:14.584 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1156191 00:33:14.584 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:33:14.584 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:14.584 02:54:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:33:14.584 02:54:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:14.584 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:33:14.584 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:14.584 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:14.584 rmmod nvme_tcp 00:33:14.584 rmmod nvme_fabrics 00:33:14.584 rmmod nvme_keyring 00:33:14.584 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:14.584 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:33:14.584 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:33:14.584 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1156171 ']' 00:33:14.584 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1156171 00:33:14.584 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1156171 ']' 00:33:14.584 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1156171 00:33:14.584 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:33:14.584 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:14.584 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1156171 00:33:14.584 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:14.584 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:14.584 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1156171' 
00:33:14.584 killing process with pid 1156171 00:33:14.584 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1156171 00:33:14.584 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1156171 00:33:14.842 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:14.842 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:14.842 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:14.842 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:33:14.842 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:33:14.842 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:14.842 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:33:14.842 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:14.842 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:14.842 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:14.842 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:14.842 02:54:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:16.758 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:16.758 00:33:16.758 real 0m17.336s 00:33:16.758 user 0m20.726s 00:33:16.758 sys 0m5.765s 00:33:16.758 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:16.758 02:54:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:16.758 ************************************ 00:33:16.758 END TEST nvmf_host_discovery 00:33:16.758 ************************************ 00:33:16.758 02:54:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:16.758 02:54:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:16.758 02:54:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:16.758 02:54:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.758 ************************************ 00:33:16.758 START TEST nvmf_host_multipath_status 00:33:16.758 ************************************ 00:33:16.758 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:17.017 * Looking for test storage... 
00:33:17.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:17.017 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:17.017 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:33:17.017 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:17.017 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:17.017 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:17.017 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:17.017 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:17.017 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:33:17.017 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:33:17.017 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:33:17.017 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:33:17.018 02:54:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:17.018 02:54:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:17.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.018 --rc genhtml_branch_coverage=1 00:33:17.018 --rc genhtml_function_coverage=1 00:33:17.018 --rc genhtml_legend=1 00:33:17.018 --rc geninfo_all_blocks=1 00:33:17.018 --rc geninfo_unexecuted_blocks=1 00:33:17.018 00:33:17.018 ' 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:17.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.018 --rc genhtml_branch_coverage=1 00:33:17.018 --rc genhtml_function_coverage=1 00:33:17.018 --rc genhtml_legend=1 00:33:17.018 --rc geninfo_all_blocks=1 00:33:17.018 --rc geninfo_unexecuted_blocks=1 00:33:17.018 00:33:17.018 ' 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:17.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.018 --rc genhtml_branch_coverage=1 00:33:17.018 --rc genhtml_function_coverage=1 00:33:17.018 --rc genhtml_legend=1 00:33:17.018 --rc geninfo_all_blocks=1 00:33:17.018 --rc geninfo_unexecuted_blocks=1 00:33:17.018 00:33:17.018 ' 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:17.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.018 --rc genhtml_branch_coverage=1 00:33:17.018 --rc genhtml_function_coverage=1 00:33:17.018 --rc genhtml_legend=1 00:33:17.018 --rc geninfo_all_blocks=1 00:33:17.018 --rc geninfo_unexecuted_blocks=1 00:33:17.018 00:33:17.018 ' 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:33:17.018 
02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:17.018 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:17.018 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:17.019 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:17.019 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:17.019 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:17.019 02:54:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:17.019 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:33:17.019 02:54:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:23.589 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:23.589 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:23.589 Found net devices under 0000:af:00.0: cvl_0_0 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:23.589 02:54:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:23.589 Found net devices under 0000:af:00.1: cvl_0_1 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:23.589 02:54:53 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:23.589 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:23.590 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:23.590 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:33:23.590 00:33:23.590 --- 10.0.0.2 ping statistics --- 00:33:23.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:23.590 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:23.590 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:23.590 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:33:23.590 00:33:23.590 --- 10.0.0.1 ping statistics --- 00:33:23.590 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:23.590 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1161168 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 1161168 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1161168 ']' 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:23.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:23.590 [2024-12-16 02:54:53.644192] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:33:23.590 [2024-12-16 02:54:53.644233] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:23.590 [2024-12-16 02:54:53.721902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:23.590 [2024-12-16 02:54:53.743508] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:23.590 [2024-12-16 02:54:53.743545] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:23.590 [2024-12-16 02:54:53.743552] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:23.590 [2024-12-16 02:54:53.743559] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:23.590 [2024-12-16 02:54:53.743564] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:23.590 [2024-12-16 02:54:53.744668] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:23.590 [2024-12-16 02:54:53.744671] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1161168 00:33:23.590 02:54:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:23.590 [2024-12-16 02:54:54.047697] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:23.590 02:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:33:23.849 Malloc0 00:33:23.850 02:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:33:24.109 02:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:24.109 02:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:24.367 [2024-12-16 02:54:54.877205] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:24.367 02:54:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:24.629 [2024-12-16 02:54:55.061664] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:24.629 02:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:33:24.629 02:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1161412 00:33:24.629 02:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:24.629 02:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1161412 /var/tmp/bdevperf.sock 00:33:24.629 02:54:55 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1161412 ']' 00:33:24.629 02:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:24.629 02:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:24.629 02:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:24.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:24.629 02:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:24.629 02:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:24.889 02:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:24.889 02:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:33:24.889 02:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:24.889 02:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:25.457 Nvme0n1 00:33:25.457 02:54:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:26.024 Nvme0n1 00:33:26.024 02:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:33:26.024 02:54:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:33:27.927 02:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:33:27.927 02:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:28.185 02:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:28.444 02:54:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:33:29.380 02:54:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:33:29.381 02:54:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:29.381 02:54:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.381 02:54:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:29.639 02:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:29.639 02:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:29.639 02:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.639 02:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:29.897 02:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:29.898 02:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:29.898 02:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.898 02:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:29.898 02:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:29.898 02:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:29.898 02:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.898 02:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:30.156 02:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:30.156 02:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:30.156 02:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:30.156 02:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:30.414 02:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:30.414 02:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:30.414 02:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:30.414 02:55:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:30.672 02:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:30.672 02:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:33:30.672 02:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:30.931 02:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:30.931 02:55:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:33:32.308 02:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:33:32.308 02:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:32.308 02:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:32.308 02:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:32.308 02:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:32.308 02:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:32.308 02:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:32.308 02:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:32.567 02:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:32.567 02:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:32.567 02:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:32.567 02:55:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:32.567 02:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:32.567 02:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:32.567 02:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:32.567 02:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:32.825 02:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:32.825 02:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:32.825 02:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:32.825 02:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:33.084 02:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.084 02:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:33.084 02:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.084 02:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:33.343 02:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.343 02:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:33.343 02:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:33.343 02:55:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:33.602 02:55:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:34.541 02:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:34.541 02:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:34.541 02:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:34.541 02:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:34.799 02:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:34.799 02:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:34.799 02:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:34.799 02:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:35.057 02:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:35.057 02:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:35.057 02:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:35.057 02:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:35.315 02:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:35.315 02:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:35.315 02:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:35.315 02:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:35.574 02:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:35.574 02:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:35.574 02:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:35.574 02:55:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:35.574 02:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:35.574 02:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:35.574 02:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:35.574 02:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:35.832 02:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:35.832 02:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:35.832 02:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:36.091 02:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:36.349 02:55:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:37.286 02:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:37.286 02:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:37.286 02:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:37.286 02:55:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:37.544 02:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:37.544 02:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:37.544 02:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:37.544 02:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:37.803 02:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:37.803 02:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:37.803 02:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:37.803 02:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:38.061 02:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:38.061 02:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:38.061 02:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:38.061 02:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.061 02:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:38.061 02:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:38.061 02:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.061 02:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:38.321 02:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:38.321 02:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:38.321 02:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:38.321 02:55:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:38.579 02:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:38.579 02:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:38.579 02:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:38.838 02:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:39.097 02:55:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:40.032 02:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:40.032 02:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:40.032 02:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:40.032 02:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:40.290 02:55:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:40.290 02:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:40.290 02:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:40.290 02:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:40.290 02:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:40.290 02:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:40.290 02:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:40.290 02:55:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:40.548 02:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:40.548 02:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:40.548 02:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:40.548 02:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:40.807 
02:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:40.807 02:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:40.807 02:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:40.807 02:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:41.065 02:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:41.065 02:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:41.065 02:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.065 02:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:41.065 02:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:41.065 02:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:41.065 02:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:41.324 02:55:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:41.582 02:55:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:42.516 02:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:42.516 02:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:42.516 02:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:42.516 02:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:42.775 02:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:42.775 02:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:42.775 02:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:42.775 02:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:43.033 02:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:43.033 02:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:43.033 02:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.033 02:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:43.292 02:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:43.292 02:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:43.292 02:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.292 02:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:43.551 02:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:43.551 02:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:43.551 02:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.551 02:55:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:43.551 02:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:43.551 02:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:43.551 02:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.551 02:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:43.810 02:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:43.810 02:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:44.069 02:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:33:44.069 02:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:44.327 02:55:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:44.586 02:55:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:45.520 02:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:45.520 02:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:45.520 02:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:33:45.520 02:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:45.779 02:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:45.779 02:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:45.779 02:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:45.779 02:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:45.779 02:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:45.779 02:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:45.779 02:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:45.779 02:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:46.038 02:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:46.038 02:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:46.038 02:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:33:46.038 02:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:46.296 02:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:46.296 02:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:46.296 02:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.296 02:55:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:46.555 02:55:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:46.555 02:55:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:46.555 02:55:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.555 02:55:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:46.813 02:55:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:46.813 02:55:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:46.813 02:55:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:47.071 02:55:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:47.071 02:55:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:48.213 02:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:48.213 02:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:48.213 02:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.213 02:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:48.473 02:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:48.473 02:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:48.473 02:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.473 02:55:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:48.473 02:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:48.473 02:55:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:48.473 02:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.473 02:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:48.732 02:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:48.732 02:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:48.732 02:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.732 02:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:48.990 02:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:48.990 02:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:48.990 02:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.990 02:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:49.249 02:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:49.249 
02:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:49.249 02:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:49.249 02:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:49.507 02:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:49.507 02:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:49.507 02:55:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:49.508 02:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:49.766 02:55:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:33:50.702 02:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:50.702 02:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:50.703 02:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:50.703 02:55:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:50.961 02:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:50.961 02:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:50.961 02:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:50.961 02:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:51.220 02:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:51.220 02:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:51.220 02:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.220 02:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:51.479 02:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:51.479 02:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:51.479 02:55:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.479 02:55:21 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:51.738 02:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:51.738 02:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:51.738 02:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.738 02:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:51.997 02:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:51.997 02:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:51.997 02:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:51.997 02:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:51.997 02:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:51.997 02:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:33:51.997 02:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:52.255 02:55:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:52.514 02:55:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:33:53.451 02:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:33:53.451 02:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:53.451 02:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:53.451 02:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:53.710 02:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:53.710 02:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:53.710 02:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:53.710 02:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:53.969 02:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:53.969 02:55:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:53.969 02:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:53.969 02:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:54.228 02:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:54.228 02:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:54.228 02:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:54.228 02:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:54.228 02:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:54.228 02:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:54.228 02:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:54.228 02:55:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:54.487 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:54.487 
02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:54.487 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:54.487 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:54.745 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:54.745 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1161412 00:33:54.745 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1161412 ']' 00:33:54.745 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1161412 00:33:54.745 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:54.745 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:54.745 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1161412 00:33:54.745 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:33:54.745 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:33:54.745 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1161412' 00:33:54.745 killing process with pid 1161412 00:33:54.745 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1161412 00:33:54.745 
02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1161412 00:33:54.745 { 00:33:54.745 "results": [ 00:33:54.745 { 00:33:54.745 "job": "Nvme0n1", 00:33:54.745 "core_mask": "0x4", 00:33:54.745 "workload": "verify", 00:33:54.745 "status": "terminated", 00:33:54.745 "verify_range": { 00:33:54.745 "start": 0, 00:33:54.745 "length": 16384 00:33:54.745 }, 00:33:54.745 "queue_depth": 128, 00:33:54.745 "io_size": 4096, 00:33:54.745 "runtime": 28.721689, 00:33:54.745 "iops": 10626.150850668984, 00:33:54.745 "mibps": 41.50840176042572, 00:33:54.745 "io_failed": 0, 00:33:54.745 "io_timeout": 0, 00:33:54.745 "avg_latency_us": 12025.129331130882, 00:33:54.745 "min_latency_us": 1201.4933333333333, 00:33:54.745 "max_latency_us": 3019898.88 00:33:54.745 } 00:33:54.745 ], 00:33:54.745 "core_count": 1 00:33:54.745 } 00:33:55.018 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1161412 00:33:55.018 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:55.018 [2024-12-16 02:54:55.136682] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:33:55.018 [2024-12-16 02:54:55.136733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1161412 ] 00:33:55.018 [2024-12-16 02:54:55.210140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:55.018 [2024-12-16 02:54:55.232501] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:33:55.018 Running I/O for 90 seconds... 
00:33:55.018 11435.00 IOPS, 44.67 MiB/s [2024-12-16T01:55:25.677Z] 11476.00 IOPS, 44.83 MiB/s [2024-12-16T01:55:25.677Z] 11471.33 IOPS, 44.81 MiB/s [2024-12-16T01:55:25.677Z] 11472.00 IOPS, 44.81 MiB/s [2024-12-16T01:55:25.677Z] 11502.00 IOPS, 44.93 MiB/s [2024-12-16T01:55:25.677Z] 11474.83 IOPS, 44.82 MiB/s [2024-12-16T01:55:25.677Z] 11475.71 IOPS, 44.83 MiB/s [2024-12-16T01:55:25.677Z] 11484.00 IOPS, 44.86 MiB/s [2024-12-16T01:55:25.677Z] 11495.33 IOPS, 44.90 MiB/s [2024-12-16T01:55:25.677Z] 11483.60 IOPS, 44.86 MiB/s [2024-12-16T01:55:25.677Z] 11494.18 IOPS, 44.90 MiB/s [2024-12-16T01:55:25.677Z] 11494.50 IOPS, 44.90 MiB/s [2024-12-16T01:55:25.677Z] [2024-12-16 02:55:09.275832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:119424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.018 [2024-12-16 02:55:09.275891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.275925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:119432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.275934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.275947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:119440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.275955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.275967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:119448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.275975] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.275987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:119456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.275994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:119464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:119472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:119480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:119488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276173] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:119496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:119504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:119512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:119520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:119528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:119536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:119544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:119552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:119560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:119568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:119576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:25 nsid:1 lba:119584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:119592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:119600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:119608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:119616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:119624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:119632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:119640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:119648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:119656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:119664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 
lba:119672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:119680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:119688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:119696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:119704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:55.019 [2024-12-16 02:55:09.276840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:119712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.019 [2024-12-16 02:55:09.276853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:35 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:33:55.019 [2024-12-16 02:55:09.276867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:119720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.019 [2024-12-16 02:55:09.276874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
[... identical command/completion *NOTICE* pairs repeated (timestamps 02:55:09.276888 through 02:55:09.280080) for WRITE sqid:1 lba:119728-119800 and READ sqid:1 lba:118792-119416, every completion failing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1, sqhd:0011-0069 ...]
00:33:55.022 11224.69 IOPS, 43.85 MiB/s [2024-12-16T01:55:25.681Z] 10422.93 IOPS, 40.71 MiB/s [2024-12-16T01:55:25.681Z] 9728.07 IOPS, 38.00 MiB/s [2024-12-16T01:55:25.681Z] 9346.31 IOPS, 36.51 MiB/s [2024-12-16T01:55:25.681Z] 9465.59 IOPS, 36.97 MiB/s [2024-12-16T01:55:25.681Z] 9577.94 IOPS, 37.41 MiB/s [2024-12-16T01:55:25.681Z] 9756.42 IOPS, 38.11 MiB/s [2024-12-16T01:55:25.681Z] 9936.20 IOPS, 38.81 MiB/s [2024-12-16T01:55:25.681Z] 10091.10 IOPS, 39.42 MiB/s [2024-12-16T01:55:25.681Z] 10144.95 IOPS, 39.63 MiB/s [2024-12-16T01:55:25.681Z] 10197.13 IOPS, 39.83 MiB/s [2024-12-16T01:55:25.681Z] 10262.12 IOPS, 40.09 MiB/s [2024-12-16T01:55:25.681Z] 10393.88 IOPS, 40.60 MiB/s [2024-12-16T01:55:25.681Z] 10520.12 IOPS, 41.09 MiB/s [2024-12-16T01:55:25.681Z]
[2024-12-16 02:55:22.988915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.022 [2024-12-16 02:55:22.988953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006d p:0 m:0 dnr:0
[... the same command/completion pattern repeated (timestamps 02:55:22.988974 through 02:55:22.991499) for WRITE sqid:1 lba:64-344 and READ sqid:1 lba:16-131056, all completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1, sqhd:006e-0003 ...]
00:33:55.022 [2024-12-16 02:55:22.991499] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:55.022 [2024-12-16 02:55:22.991511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.022 [2024-12-16 02:55:22.991518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:55.022 [2024-12-16 02:55:22.991530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.022 [2024-12-16 02:55:22.991537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:55.022 [2024-12-16 02:55:22.991550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.022 [2024-12-16 02:55:22.991557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:55.022 [2024-12-16 02:55:22.991569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.022 [2024-12-16 02:55:22.991579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:55.022 [2024-12-16 02:55:22.991923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.022 [2024-12-16 02:55:22.991938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:55.022 [2024-12-16 02:55:22.991952] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.991961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.991973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.991981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.991994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.992001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.992014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.992022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.992034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.992042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.992054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.992061] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.992074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.992082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.992094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.992102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.992114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.992122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.992135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.992142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.992156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.992163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.992178] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.992186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.992199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.992206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.992218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.992225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.992238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.992246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.992258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.992265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.992278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.992284] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.992297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.992304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.992317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.992324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.992336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.992343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.992356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.992363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.992376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:48 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.992382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.992395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:23 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.992402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.992415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.992424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.992436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.992442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.992455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:131048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.023 [2024-12-16 02:55:22.992462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.992476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.992483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.992495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.992502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:17 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.992514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.992522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.992535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.023 [2024-12-16 02:55:22.992541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.993487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.993504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.993519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.993526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.993538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.993546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.993558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:808 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.993566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:55.023 [2024-12-16 02:55:22.993578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.023 [2024-12-16 02:55:22.993584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.993597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.024 [2024-12-16 02:55:22.993608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.993621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.024 [2024-12-16 02:55:22.993628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.993640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.024 [2024-12-16 02:55:22.993647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.993660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.024 [2024-12-16 02:55:22.993668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 
00:33:55.024 [2024-12-16 02:55:22.993680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.024 [2024-12-16 02:55:22.993687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.993699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.024 [2024-12-16 02:55:22.993707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.993720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.024 [2024-12-16 02:55:22.993729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.993741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.024 [2024-12-16 02:55:22.993748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.993760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.024 [2024-12-16 02:55:22.993776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.993789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.024 [2024-12-16 
02:55:22.993796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.993808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.024 [2024-12-16 02:55:22.993815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.993827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.024 [2024-12-16 02:55:22.993835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.993853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.024 [2024-12-16 02:55:22.993861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.993875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.024 [2024-12-16 02:55:22.993882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.993894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.024 [2024-12-16 02:55:22.993902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.993914] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.024 [2024-12-16 02:55:22.993921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.993933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.024 [2024-12-16 02:55:22.993940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.993953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.024 [2024-12-16 02:55:22.993960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.993972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.024 [2024-12-16 02:55:22.993979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.993991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.024 [2024-12-16 02:55:22.993998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.994011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.024 [2024-12-16 02:55:22.994018] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.994031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.024 [2024-12-16 02:55:22.994037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.994050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.024 [2024-12-16 02:55:22.994057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.994070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.024 [2024-12-16 02:55:22.994077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.994089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.024 [2024-12-16 02:55:22.994097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.994111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.024 [2024-12-16 02:55:22.994118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.994131] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.024 [2024-12-16 02:55:22.994138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.994151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.024 [2024-12-16 02:55:22.994158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.994170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.024 [2024-12-16 02:55:22.994178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.994190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.024 [2024-12-16 02:55:22.994197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.994654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.024 [2024-12-16 02:55:22.994669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.994685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.024 [2024-12-16 02:55:22.994692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.994705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.024 [2024-12-16 02:55:22.994712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.994724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.024 [2024-12-16 02:55:22.994732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.994744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.024 [2024-12-16 02:55:22.994751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.994763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.024 [2024-12-16 02:55:22.994770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.994782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.024 [2024-12-16 02:55:22.994790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.994802] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.024 [2024-12-16 02:55:22.994812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.994825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.024 [2024-12-16 02:55:22.994832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:55.024 [2024-12-16 02:55:22.994844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.024 [2024-12-16 02:55:22.994858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.994873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.025 [2024-12-16 02:55:22.994880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.994892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.025 [2024-12-16 02:55:22.994899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.994912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.025 [2024-12-16 02:55:22.994919] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.994931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.025 [2024-12-16 02:55:22.994938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.994950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.025 [2024-12-16 02:55:22.994957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.994970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.025 [2024-12-16 02:55:22.994977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.994990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.025 [2024-12-16 02:55:22.994996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.995008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.025 [2024-12-16 02:55:22.995015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.995028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:105 nsid:1 lba:568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.025 [2024-12-16 02:55:22.995035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.995047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.025 [2024-12-16 02:55:22.995054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.995068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.025 [2024-12-16 02:55:22.995074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.995088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.025 [2024-12-16 02:55:22.995094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.995106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.025 [2024-12-16 02:55:22.995113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.995125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.025 [2024-12-16 02:55:22.995132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:22 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.995145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.025 [2024-12-16 02:55:22.995153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.995165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.025 [2024-12-16 02:55:22.995172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.995186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.025 [2024-12-16 02:55:22.995193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.995206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.025 [2024-12-16 02:55:22.995213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.995874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.025 [2024-12-16 02:55:22.995890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.995905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:800 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.025 [2024-12-16 02:55:22.995912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.995924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.025 [2024-12-16 02:55:22.995932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.995945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.025 [2024-12-16 02:55:22.995951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.995967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.025 [2024-12-16 02:55:22.995974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.995986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.025 [2024-12-16 02:55:22.995994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.996006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.025 [2024-12-16 02:55:22.996013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d 
p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.996026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.025 [2024-12-16 02:55:22.996032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.996045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.025 [2024-12-16 02:55:22.996053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.996065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.025 [2024-12-16 02:55:22.996072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.996084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.025 [2024-12-16 02:55:22.996091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.996104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.025 [2024-12-16 02:55:22.996111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.996123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.025 
[2024-12-16 02:55:22.996131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.996143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.025 [2024-12-16 02:55:22.996151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.996165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.025 [2024-12-16 02:55:22.996172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.996185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.025 [2024-12-16 02:55:22.996192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.996204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.025 [2024-12-16 02:55:22.996216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.996228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.025 [2024-12-16 02:55:22.996235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 
02:55:22.996248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.025 [2024-12-16 02:55:22.996254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.996267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.025 [2024-12-16 02:55:22.996274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.996287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.025 [2024-12-16 02:55:22.996293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:55.025 [2024-12-16 02:55:22.996306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.025 [2024-12-16 02:55:22.996313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.996601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.026 [2024-12-16 02:55:22.996611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.996625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.026 [2024-12-16 02:55:22.996633] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.996645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.026 [2024-12-16 02:55:22.996653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.996665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.026 [2024-12-16 02:55:22.996672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.996685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.026 [2024-12-16 02:55:22.996692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.996704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.026 [2024-12-16 02:55:22.996711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.996724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.026 [2024-12-16 02:55:22.996735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.996747] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.026 [2024-12-16 02:55:22.996754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.996768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.026 [2024-12-16 02:55:22.996775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.996787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.026 [2024-12-16 02:55:22.996794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.996806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.026 [2024-12-16 02:55:22.996814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.996826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.026 [2024-12-16 02:55:22.996833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.996845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.026 [2024-12-16 02:55:22.996858] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.996870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.026 [2024-12-16 02:55:22.996877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.996889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.026 [2024-12-16 02:55:22.996896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.996908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.026 [2024-12-16 02:55:22.996915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.996927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.026 [2024-12-16 02:55:22.996935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.998372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.026 [2024-12-16 02:55:22.998391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.998406] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.026 [2024-12-16 02:55:22.998413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.998428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.026 [2024-12-16 02:55:22.998436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.998448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.026 [2024-12-16 02:55:22.998455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.998467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.026 [2024-12-16 02:55:22.998474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.998487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.026 [2024-12-16 02:55:22.998496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.998509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.026 [2024-12-16 02:55:22.998516] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.998530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.026 [2024-12-16 02:55:22.998538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.998551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.026 [2024-12-16 02:55:22.998559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.998571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:48 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.026 [2024-12-16 02:55:22.998578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.998591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.026 [2024-12-16 02:55:22.998598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.998611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.026 [2024-12-16 02:55:22.998619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.998631] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.026 [2024-12-16 02:55:22.998638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.998650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.026 [2024-12-16 02:55:22.998658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.998670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.026 [2024-12-16 02:55:22.998678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.998691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.026 [2024-12-16 02:55:22.998698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.998710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.026 [2024-12-16 02:55:22.998717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.998730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.026 [2024-12-16 02:55:22.998737] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.998750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.026 [2024-12-16 02:55:22.998756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.998769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.026 [2024-12-16 02:55:22.998776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.998789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.026 [2024-12-16 02:55:22.998796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.998808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.026 [2024-12-16 02:55:22.998817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:55.026 [2024-12-16 02:55:22.998831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.026 [2024-12-16 02:55:22.998839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:55.027 [2024-12-16 02:55:22.998856] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.027 [2024-12-16 02:55:22.998864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:55.027 [2024-12-16 02:55:22.998877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.027 [2024-12-16 02:55:22.998884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:55.027 [2024-12-16 02:55:22.998897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.027 [2024-12-16 02:55:22.998905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:55.027 [2024-12-16 02:55:22.998918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.027 [2024-12-16 02:55:22.998927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:55.027 [2024-12-16 02:55:22.998939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.027 [2024-12-16 02:55:22.998947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:55.027 [2024-12-16 02:55:22.998960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.027 [2024-12-16 02:55:22.998967] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:55.027 [2024-12-16 02:55:22.998979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.027 [2024-12-16 02:55:22.998987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002b p:0 m:0 dnr:0
[... the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats for the remaining outstanding READ and WRITE commands on qid:1 (timestamps 02:55:22.998 through 02:55:23.012, lba 48 through 1624), every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) with cdw0:0 p:0 m:0 dnr:0 ...]
00:33:55.030 [2024-12-16 02:55:23.012805] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.030 [2024-12-16 02:55:23.012812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.012825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.030 [2024-12-16 02:55:23.012832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.012844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.030 [2024-12-16 02:55:23.012859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.012873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.030 [2024-12-16 02:55:23.012895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.012914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.030 [2024-12-16 02:55:23.012924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.012941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.030 [2024-12-16 02:55:23.012953] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.012970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:1296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.030 [2024-12-16 02:55:23.012981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.012997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.030 [2024-12-16 02:55:23.013007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.013025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.030 [2024-12-16 02:55:23.013035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.014691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.030 [2024-12-16 02:55:23.014712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.014732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.030 [2024-12-16 02:55:23.014742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.014760] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.030 [2024-12-16 02:55:23.014770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.014787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.030 [2024-12-16 02:55:23.014797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.014814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.030 [2024-12-16 02:55:23.014824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.014841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.030 [2024-12-16 02:55:23.014857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.014874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.030 [2024-12-16 02:55:23.014885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.014901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.030 [2024-12-16 02:55:23.014912] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.014929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.030 [2024-12-16 02:55:23.014946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.014964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.030 [2024-12-16 02:55:23.014974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.014992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.030 [2024-12-16 02:55:23.015002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.015019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.030 [2024-12-16 02:55:23.015030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.015047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.030 [2024-12-16 02:55:23.015057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.015073] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.030 [2024-12-16 02:55:23.015083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.015101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.030 [2024-12-16 02:55:23.015111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.015129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.030 [2024-12-16 02:55:23.015139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.015157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.030 [2024-12-16 02:55:23.015167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.015184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.030 [2024-12-16 02:55:23.015195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.015213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.030 [2024-12-16 02:55:23.015223] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.015241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.030 [2024-12-16 02:55:23.015251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.015268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.030 [2024-12-16 02:55:23.015278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.015297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.030 [2024-12-16 02:55:23.015307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.015325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.030 [2024-12-16 02:55:23.015334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.015351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.030 [2024-12-16 02:55:23.015362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.015379] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.030 [2024-12-16 02:55:23.015389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:55.030 [2024-12-16 02:55:23.015406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.030 [2024-12-16 02:55:23.015416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.016518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.031 [2024-12-16 02:55:23.016539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.016559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.031 [2024-12-16 02:55:23.016569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.016587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.031 [2024-12-16 02:55:23.016596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.016613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.031 [2024-12-16 02:55:23.016623] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.016640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.031 [2024-12-16 02:55:23.016650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.016667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.031 [2024-12-16 02:55:23.016676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.016693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.031 [2024-12-16 02:55:23.016703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.016719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.031 [2024-12-16 02:55:23.016732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.016750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.031 [2024-12-16 02:55:23.016759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.016776] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.031 [2024-12-16 02:55:23.016786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.016803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.031 [2024-12-16 02:55:23.016813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.016829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:1832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.031 [2024-12-16 02:55:23.016839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.016866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.031 [2024-12-16 02:55:23.016877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.017874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.031 [2024-12-16 02:55:23.017895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.017914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.031 [2024-12-16 02:55:23.017926] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.017944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.031 [2024-12-16 02:55:23.017953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.017970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.031 [2024-12-16 02:55:23.017980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.017997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.031 [2024-12-16 02:55:23.018007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.018023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.031 [2024-12-16 02:55:23.018034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.018052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.031 [2024-12-16 02:55:23.018064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.018082] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.031 [2024-12-16 02:55:23.018092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.018109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.031 [2024-12-16 02:55:23.018119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.018135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.031 [2024-12-16 02:55:23.018146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.018163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.031 [2024-12-16 02:55:23.018173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.018189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.031 [2024-12-16 02:55:23.018200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.018217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.031 [2024-12-16 02:55:23.018226] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.018243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.031 [2024-12-16 02:55:23.018253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.018270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.031 [2024-12-16 02:55:23.018279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.018296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.031 [2024-12-16 02:55:23.018305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.018323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.031 [2024-12-16 02:55:23.018332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.018349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.031 [2024-12-16 02:55:23.018358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.018376] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.031 [2024-12-16 02:55:23.018386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:55.031 [2024-12-16 02:55:23.018405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.031 [2024-12-16 02:55:23.018414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.032 [2024-12-16 02:55:23.018432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.032 [2024-12-16 02:55:23.018443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:55.032 [2024-12-16 02:55:23.018460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.032 [2024-12-16 02:55:23.018472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:55.032 [2024-12-16 02:55:23.018489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.032 [2024-12-16 02:55:23.018499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:55.032 [2024-12-16 02:55:23.018516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.032 [2024-12-16 02:55:23.018525] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.018542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.032 [2024-12-16 02:55:23.018552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.018569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.032 [2024-12-16 02:55:23.018578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.018595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.032 [2024-12-16 02:55:23.018605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.018622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.032 [2024-12-16 02:55:23.018631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.019674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.032 [2024-12-16 02:55:23.019692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.019712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.032 [2024-12-16 02:55:23.019723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.019740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.032 [2024-12-16 02:55:23.019751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.019771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.032 [2024-12-16 02:55:23.019781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.019798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.032 [2024-12-16 02:55:23.019808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.019826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.032 [2024-12-16 02:55:23.019836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.019860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.032 [2024-12-16 02:55:23.019872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.020561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.032 [2024-12-16 02:55:23.020579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.020600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.032 [2024-12-16 02:55:23.020610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.020628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.032 [2024-12-16 02:55:23.020639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.020656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:1968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.032 [2024-12-16 02:55:23.020667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.020684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:1984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.032 [2024-12-16 02:55:23.020694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.020711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.032 [2024-12-16 02:55:23.020722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.020739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.032 [2024-12-16 02:55:23.020750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.020766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.032 [2024-12-16 02:55:23.020777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.020793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.032 [2024-12-16 02:55:23.020808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.020826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.032 [2024-12-16 02:55:23.020837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.020862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.032 [2024-12-16 02:55:23.020873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.020890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.032 [2024-12-16 02:55:23.020901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.020918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.032 [2024-12-16 02:55:23.020928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.020945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.032 [2024-12-16 02:55:23.020956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.020972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.032 [2024-12-16 02:55:23.020983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.021000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:1648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.032 [2024-12-16 02:55:23.021010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.021028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.032 [2024-12-16 02:55:23.021038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.021055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.032 [2024-12-16 02:55:23.021066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.021083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.032 [2024-12-16 02:55:23.021093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.021110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.032 [2024-12-16 02:55:23.021120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.021139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.032 [2024-12-16 02:55:23.021152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.021171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.032 [2024-12-16 02:55:23.021182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.021198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.032 [2024-12-16 02:55:23.021211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.021229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.032 [2024-12-16 02:55:23.021243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:33:55.032 [2024-12-16 02:55:23.021262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.033 [2024-12-16 02:55:23.021273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.021292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.033 [2024-12-16 02:55:23.021303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.021845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.033 [2024-12-16 02:55:23.021871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.021892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.033 [2024-12-16 02:55:23.021903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.021922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.033 [2024-12-16 02:55:23.021932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.021950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.033 [2024-12-16 02:55:23.021961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.021978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.033 [2024-12-16 02:55:23.021989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.022006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.033 [2024-12-16 02:55:23.022017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.022034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.033 [2024-12-16 02:55:23.022046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.022069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.033 [2024-12-16 02:55:23.022080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.022098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.033 [2024-12-16 02:55:23.022109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.022128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.033 [2024-12-16 02:55:23.022138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.022155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.033 [2024-12-16 02:55:23.022166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.022184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.033 [2024-12-16 02:55:23.022194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.023090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.033 [2024-12-16 02:55:23.023111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.023131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.033 [2024-12-16 02:55:23.023141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.023159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.033 [2024-12-16 02:55:23.023169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.023185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.033 [2024-12-16 02:55:23.023196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.023223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.033 [2024-12-16 02:55:23.023230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.023243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.033 [2024-12-16 02:55:23.023250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.023263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.033 [2024-12-16 02:55:23.023271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.023286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.033 [2024-12-16 02:55:23.023293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.023306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.033 [2024-12-16 02:55:23.023314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.023327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.033 [2024-12-16 02:55:23.023334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.023346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.033 [2024-12-16 02:55:23.023353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.023366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.033 [2024-12-16 02:55:23.023373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.023386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.033 [2024-12-16 02:55:23.023393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.023405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.033 [2024-12-16 02:55:23.023412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.023425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.033 [2024-12-16 02:55:23.023433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.023445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.033 [2024-12-16 02:55:23.023452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.023464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.033 [2024-12-16 02:55:23.023472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.023485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.033 [2024-12-16 02:55:23.023493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.023506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:1904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.033 [2024-12-16 02:55:23.023513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.023525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.033 [2024-12-16 02:55:23.023536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.023548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.033 [2024-12-16 02:55:23.023556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.023568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:1728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.033 [2024-12-16 02:55:23.023575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.023587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.033 [2024-12-16 02:55:23.023595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.023607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.033 [2024-12-16 02:55:23.023614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.023626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.033 [2024-12-16 02:55:23.023633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.023645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.033 [2024-12-16 02:55:23.023653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:33:55.033 [2024-12-16 02:55:23.023665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.034 [2024-12-16 02:55:23.023672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.023684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.034 [2024-12-16 02:55:23.023691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.023704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:1800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.034 [2024-12-16 02:55:23.023711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.025452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:1296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.034 [2024-12-16 02:55:23.025470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.025486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.034 [2024-12-16 02:55:23.025495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.025508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.034 [2024-12-16 02:55:23.025518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.025531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.034 [2024-12-16 02:55:23.025538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.025550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.034 [2024-12-16 02:55:23.025559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.025572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.034 [2024-12-16 02:55:23.025579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.025591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.034 [2024-12-16 02:55:23.025598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.025612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.034 [2024-12-16 02:55:23.025619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.025632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.034 [2024-12-16 02:55:23.025638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.025650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.034 [2024-12-16 02:55:23.025658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.025671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.034 [2024-12-16 02:55:23.025678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.025691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.034 [2024-12-16 02:55:23.025697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.025710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.034 [2024-12-16 02:55:23.025718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.025731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.034 [2024-12-16 02:55:23.025738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.025750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.034 [2024-12-16 02:55:23.025757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.025771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.034 [2024-12-16 02:55:23.025779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.025791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.034 [2024-12-16 02:55:23.025798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.025811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.034 [2024-12-16 02:55:23.025818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.025831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.034 [2024-12-16 02:55:23.025839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.025856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.034 [2024-12-16 02:55:23.025864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.025877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.034 [2024-12-16 02:55:23.025885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.025897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.034 [2024-12-16 02:55:23.025905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.025917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.034 [2024-12-16 02:55:23.025924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.025937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:1784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.034 [2024-12-16 02:55:23.025943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.025956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.034 [2024-12-16 02:55:23.025964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.025976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:2288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.034 [2024-12-16 02:55:23.025983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.025995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.034 [2024-12-16 02:55:23.026002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.026016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.034 [2024-12-16 02:55:23.026023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.026035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.034 [2024-12-16 02:55:23.026042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.026054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.034 [2024-12-16 02:55:23.026061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.026074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.034 [2024-12-16 02:55:23.026082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.026094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.034 [2024-12-16 02:55:23.026100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.026112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.034 [2024-12-16 02:55:23.026119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.026132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.034 [2024-12-16 02:55:23.026139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.027613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.034 [2024-12-16 02:55:23.027633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.027649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.034 [2024-12-16 02:55:23.027657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.027672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.034 [2024-12-16 02:55:23.027679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:33:55.034 [2024-12-16 02:55:23.027693] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.035 [2024-12-16 02:55:23.027701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.027713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.035 [2024-12-16 02:55:23.027722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.027734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.035 [2024-12-16 02:55:23.027745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.027758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.035 [2024-12-16 02:55:23.027767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.027780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.035 [2024-12-16 02:55:23.027788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.027801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.035 [2024-12-16 02:55:23.027809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.027822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.035 [2024-12-16 02:55:23.027830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.027842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.035 [2024-12-16 02:55:23.027855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.027868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.035 [2024-12-16 02:55:23.027876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.027890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.035 [2024-12-16 02:55:23.027897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.027911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.035 [2024-12-16 02:55:23.027918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.027931] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.035 [2024-12-16 02:55:23.027940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.027952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.035 [2024-12-16 02:55:23.027960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.027972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.035 [2024-12-16 02:55:23.027981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.027994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.035 [2024-12-16 02:55:23.028002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.028019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.035 [2024-12-16 02:55:23.028027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.028040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.035 [2024-12-16 02:55:23.028049] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.028062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.035 [2024-12-16 02:55:23.028070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.028933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.035 [2024-12-16 02:55:23.028950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.028964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.035 [2024-12-16 02:55:23.028973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.028986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.035 [2024-12-16 02:55:23.028994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.029006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.035 [2024-12-16 02:55:23.029013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.029026] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.035 [2024-12-16 02:55:23.029034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.029046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.035 [2024-12-16 02:55:23.029053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.029066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:2288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.035 [2024-12-16 02:55:23.029073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.029085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.035 [2024-12-16 02:55:23.029093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.029106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.035 [2024-12-16 02:55:23.029112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.029136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.035 [2024-12-16 02:55:23.029145] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.029157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.035 [2024-12-16 02:55:23.029164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.029177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.035 [2024-12-16 02:55:23.029184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.029197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.035 [2024-12-16 02:55:23.029205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.029218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.035 [2024-12-16 02:55:23.029225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.029237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:1408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.035 [2024-12-16 02:55:23.029244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.029257] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.035 [2024-12-16 02:55:23.029264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.029276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.035 [2024-12-16 02:55:23.029284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.029296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.035 [2024-12-16 02:55:23.029305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.029317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.035 [2024-12-16 02:55:23.029324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.029336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.035 [2024-12-16 02:55:23.029343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.029356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.035 [2024-12-16 02:55:23.029364] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.029377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.035 [2024-12-16 02:55:23.029385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:55.035 [2024-12-16 02:55:23.029398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.036 [2024-12-16 02:55:23.029405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.029417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.036 [2024-12-16 02:55:23.029424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.029935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:2632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.036 [2024-12-16 02:55:23.029948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.029962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.036 [2024-12-16 02:55:23.029970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.029982] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.036 [2024-12-16 02:55:23.029990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.030003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.036 [2024-12-16 02:55:23.030009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.030022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.036 [2024-12-16 02:55:23.030029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.030042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.036 [2024-12-16 02:55:23.030050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.030062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.036 [2024-12-16 02:55:23.030070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.030082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.036 [2024-12-16 02:55:23.030090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.030103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:2392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.036 [2024-12-16 02:55:23.030110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.030122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.036 [2024-12-16 02:55:23.030132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.030144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.036 [2024-12-16 02:55:23.030152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.030165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.036 [2024-12-16 02:55:23.030173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.030194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.036 [2024-12-16 02:55:23.030201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.030214] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.036 [2024-12-16 02:55:23.030222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.030235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.036 [2024-12-16 02:55:23.030243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.030255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.036 [2024-12-16 02:55:23.030263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.030277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.036 [2024-12-16 02:55:23.030284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.030742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.036 [2024-12-16 02:55:23.030755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.030770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.036 [2024-12-16 02:55:23.030778] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.030791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.036 [2024-12-16 02:55:23.030798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.030810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:2672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.036 [2024-12-16 02:55:23.030818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.030831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.036 [2024-12-16 02:55:23.030839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.030861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.036 [2024-12-16 02:55:23.030870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.030882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.036 [2024-12-16 02:55:23.030890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.030902] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.036 [2024-12-16 02:55:23.030910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.030923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.036 [2024-12-16 02:55:23.030930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.030943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.036 [2024-12-16 02:55:23.030950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.030964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.036 [2024-12-16 02:55:23.030972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.031542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.036 [2024-12-16 02:55:23.031556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.031570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.036 [2024-12-16 02:55:23.031579] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.031592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.036 [2024-12-16 02:55:23.031600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.031613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.036 [2024-12-16 02:55:23.031621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:55.036 [2024-12-16 02:55:23.031634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.036 [2024-12-16 02:55:23.031642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.031655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.037 [2024-12-16 02:55:23.031663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.031679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.037 [2024-12-16 02:55:23.031687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.031700] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.037 [2024-12-16 02:55:23.031708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.031722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.037 [2024-12-16 02:55:23.031730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.031742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.037 [2024-12-16 02:55:23.031750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.031762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.037 [2024-12-16 02:55:23.031770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.031783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:2584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.037 [2024-12-16 02:55:23.031791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.031804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:2616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.037 [2024-12-16 02:55:23.031812] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.031825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.037 [2024-12-16 02:55:23.031832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.031845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.037 [2024-12-16 02:55:23.031861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.031874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:2432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.037 [2024-12-16 02:55:23.031882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.031895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.037 [2024-12-16 02:55:23.031903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.031917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.037 [2024-12-16 02:55:23.031924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.031938] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.037 [2024-12-16 02:55:23.031948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.031961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.037 [2024-12-16 02:55:23.031969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.031982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.037 [2024-12-16 02:55:23.031990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.032003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.037 [2024-12-16 02:55:23.032011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.032023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.037 [2024-12-16 02:55:23.032031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.032044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.037 [2024-12-16 02:55:23.032052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.032065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.037 [2024-12-16 02:55:23.032073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.032086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.037 [2024-12-16 02:55:23.032095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.032108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.037 [2024-12-16 02:55:23.032117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.033152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.037 [2024-12-16 02:55:23.033168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.033183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.037 [2024-12-16 02:55:23.033192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.033205] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.037 [2024-12-16 02:55:23.033212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.033225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.037 [2024-12-16 02:55:23.033235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.033249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.037 [2024-12-16 02:55:23.033257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.033269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.037 [2024-12-16 02:55:23.033276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.033289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.037 [2024-12-16 02:55:23.033295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.033309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.037 [2024-12-16 02:55:23.033316] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.033328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.037 [2024-12-16 02:55:23.033335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.033348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.037 [2024-12-16 02:55:23.033355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.033715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.037 [2024-12-16 02:55:23.033728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.033744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.037 [2024-12-16 02:55:23.033753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.033766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.037 [2024-12-16 02:55:23.033773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.033785] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.037 [2024-12-16 02:55:23.033793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.033807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.037 [2024-12-16 02:55:23.033814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.033826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.037 [2024-12-16 02:55:23.033833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.033854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.037 [2024-12-16 02:55:23.033862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:55.037 [2024-12-16 02:55:23.033876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.037 [2024-12-16 02:55:23.033883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.033896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.038 [2024-12-16 02:55:23.033903] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.033915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.038 [2024-12-16 02:55:23.033923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.033936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.038 [2024-12-16 02:55:23.033943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.033955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.038 [2024-12-16 02:55:23.033963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.033975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.038 [2024-12-16 02:55:23.033984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.033997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.038 [2024-12-16 02:55:23.034006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.034018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.038 [2024-12-16 02:55:23.034025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.034039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.038 [2024-12-16 02:55:23.034046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.034058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.038 [2024-12-16 02:55:23.034065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.034077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.038 [2024-12-16 02:55:23.034085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.034099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.038 [2024-12-16 02:55:23.034106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.034119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.038 [2024-12-16 02:55:23.034126] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.034138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.038 [2024-12-16 02:55:23.034145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.034158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.038 [2024-12-16 02:55:23.034165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.034443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.038 [2024-12-16 02:55:23.034454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.034468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.038 [2024-12-16 02:55:23.034476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.034489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.038 [2024-12-16 02:55:23.034496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.034509] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:2888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.038 [2024-12-16 02:55:23.034517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.034530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.038 [2024-12-16 02:55:23.034538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.034551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.038 [2024-12-16 02:55:23.034558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.034570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.038 [2024-12-16 02:55:23.034578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.034590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.038 [2024-12-16 02:55:23.034598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.034612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.038 [2024-12-16 02:55:23.034622] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.034635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.038 [2024-12-16 02:55:23.034642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.034655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.038 [2024-12-16 02:55:23.034663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.035741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.038 [2024-12-16 02:55:23.035759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.035774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.038 [2024-12-16 02:55:23.035783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.035796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.038 [2024-12-16 02:55:23.035803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.035816] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.038 [2024-12-16 02:55:23.035823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.035835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.038 [2024-12-16 02:55:23.035843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.035862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.038 [2024-12-16 02:55:23.035869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.035881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:2808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.038 [2024-12-16 02:55:23.035888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.035901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.038 [2024-12-16 02:55:23.035909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.035922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.038 [2024-12-16 02:55:23.035928] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.035941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.038 [2024-12-16 02:55:23.035951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.035964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.038 [2024-12-16 02:55:23.035972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.035984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.038 [2024-12-16 02:55:23.035991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.036003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.038 [2024-12-16 02:55:23.036016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.036030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.038 [2024-12-16 02:55:23.036037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.036050] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.038 [2024-12-16 02:55:23.036058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:55.038 [2024-12-16 02:55:23.036071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.039 [2024-12-16 02:55:23.036080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.036092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.039 [2024-12-16 02:55:23.036100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.036113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.039 [2024-12-16 02:55:23.036121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.036134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.039 [2024-12-16 02:55:23.036142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.036155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.039 [2024-12-16 02:55:23.036163] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.036176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.039 [2024-12-16 02:55:23.036184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.036198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.039 [2024-12-16 02:55:23.036205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.036221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.039 [2024-12-16 02:55:23.036229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.036242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.039 [2024-12-16 02:55:23.036251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.036265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.039 [2024-12-16 02:55:23.036273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.036287] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.039 [2024-12-16 02:55:23.036295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.036308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:2888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.039 [2024-12-16 02:55:23.036316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.036331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.039 [2024-12-16 02:55:23.036339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.036351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.039 [2024-12-16 02:55:23.036359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.036372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.039 [2024-12-16 02:55:23.036380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.037617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.039 [2024-12-16 02:55:23.037636] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.037651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.039 [2024-12-16 02:55:23.037660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.037672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.039 [2024-12-16 02:55:23.037680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.037692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.039 [2024-12-16 02:55:23.037699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.037714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.039 [2024-12-16 02:55:23.037723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.037736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.039 [2024-12-16 02:55:23.037743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.037755] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.039 [2024-12-16 02:55:23.037762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.037775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.039 [2024-12-16 02:55:23.037782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.037795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.039 [2024-12-16 02:55:23.037801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.037814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.039 [2024-12-16 02:55:23.037821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.037834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.039 [2024-12-16 02:55:23.037841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.037861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:3128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.039 [2024-12-16 02:55:23.037869] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.037881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.039 [2024-12-16 02:55:23.037889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.037902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.039 [2024-12-16 02:55:23.037910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.037922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.039 [2024-12-16 02:55:23.037930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.037942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.039 [2024-12-16 02:55:23.037949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.037963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.039 [2024-12-16 02:55:23.037971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.037984] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.039 [2024-12-16 02:55:23.037991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.038003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.039 [2024-12-16 02:55:23.038011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.038024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.039 [2024-12-16 02:55:23.038035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.038048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:1800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.039 [2024-12-16 02:55:23.038056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.039479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.039 [2024-12-16 02:55:23.039497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.039512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.039 [2024-12-16 02:55:23.039521] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.039533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.039 [2024-12-16 02:55:23.039540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.039553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.039 [2024-12-16 02:55:23.039559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:55.039 [2024-12-16 02:55:23.039572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.040 [2024-12-16 02:55:23.039579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.039592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.040 [2024-12-16 02:55:23.039599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.039611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.040 [2024-12-16 02:55:23.039618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.039630] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.040 [2024-12-16 02:55:23.039639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.039657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.040 [2024-12-16 02:55:23.039664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.039676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.040 [2024-12-16 02:55:23.039683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.039696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:2960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.040 [2024-12-16 02:55:23.039703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.039716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.040 [2024-12-16 02:55:23.039723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.039735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:2872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.040 [2024-12-16 02:55:23.039743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.039756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.040 [2024-12-16 02:55:23.039763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.039776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.040 [2024-12-16 02:55:23.039783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.039795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.040 [2024-12-16 02:55:23.039803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.039816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.040 [2024-12-16 02:55:23.039824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.039836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.040 [2024-12-16 02:55:23.039843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.039861] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:3256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.040 [2024-12-16 02:55:23.039870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.039883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.040 [2024-12-16 02:55:23.039890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.039906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.040 [2024-12-16 02:55:23.039914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.039926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:3304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.040 [2024-12-16 02:55:23.039934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.039946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.040 [2024-12-16 02:55:23.039953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.039966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.040 [2024-12-16 02:55:23.039973] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.039986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.040 [2024-12-16 02:55:23.039993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.040005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.040 [2024-12-16 02:55:23.040012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.040024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.040 [2024-12-16 02:55:23.040032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.040045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:3032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.040 [2024-12-16 02:55:23.040052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.040064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.040 [2024-12-16 02:55:23.040071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.040084] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.040 [2024-12-16 02:55:23.040092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.040104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.040 [2024-12-16 02:55:23.040112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.040124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.040 [2024-12-16 02:55:23.040132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.040145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:3128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.040 [2024-12-16 02:55:23.040153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.040167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.040 [2024-12-16 02:55:23.040174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.040187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.040 [2024-12-16 02:55:23.040194] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.040207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:3008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.040 [2024-12-16 02:55:23.040214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.040227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.040 [2024-12-16 02:55:23.040235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.040839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:2832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.040 [2024-12-16 02:55:23.040859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.040874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:2584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.040 [2024-12-16 02:55:23.040882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:33:55.040 [2024-12-16 02:55:23.040895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.040 [2024-12-16 02:55:23.040903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:33:55.041 [2024-12-16 02:55:23.040916] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.041 [2024-12-16 02:55:23.040924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:33:55.041 [2024-12-16 02:55:23.040937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.041 [2024-12-16 02:55:23.040944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:55.041 [2024-12-16 02:55:23.040957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.041 [2024-12-16 02:55:23.040965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:55.041 [2024-12-16 02:55:23.040977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.041 [2024-12-16 02:55:23.040985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:55.041 [2024-12-16 02:55:23.040998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.041 [2024-12-16 02:55:23.041009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:33:55.041 [2024-12-16 02:55:23.041022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.041 [2024-12-16 02:55:23.041030] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:55.041 [2024-12-16 02:55:23.041043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.041 [2024-12-16 02:55:23.041051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:33:55.041 [2024-12-16 02:55:23.041064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.041 [2024-12-16 02:55:23.041072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:33:55.041 [2024-12-16 02:55:23.041084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.041 [2024-12-16 02:55:23.041092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:55.041 [2024-12-16 02:55:23.041105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.041 [2024-12-16 02:55:23.041113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:55.041 [2024-12-16 02:55:23.042389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.041 [2024-12-16 02:55:23.042408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:55.041 [2024-12-16 02:55:23.042422] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.041 [2024-12-16 02:55:23.042431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:33:55.041 [2024-12-16 02:55:23.042443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.041 [2024-12-16 02:55:23.042451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:33:55.041 [2024-12-16 02:55:23.042463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.041 [2024-12-16 02:55:23.042471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:33:55.041 [2024-12-16 02:55:23.042483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.041 [2024-12-16 02:55:23.042492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:33:55.041 [2024-12-16 02:55:23.042504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.041 [2024-12-16 02:55:23.042512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:33:55.041 [2024-12-16 02:55:23.042524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.041 [2024-12-16 02:55:23.042531] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:33:55.041 [2024-12-16 02:55:23.042547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.041 [2024-12-16 02:55:23.042555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:33:55.041 [2024-12-16 02:55:23.042568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:3184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.041 [2024-12-16 02:55:23.042575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:33:55.041 [2024-12-16 02:55:23.042587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.041 [2024-12-16 02:55:23.042595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:33:55.041 [2024-12-16 02:55:23.042608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.041 [2024-12-16 02:55:23.042615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:33:55.041 [2024-12-16 02:55:23.042627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.041 [2024-12-16 02:55:23.042634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:55.041 [2024-12-16 02:55:23.042647] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.041 [2024-12-16 02:55:23.042655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.041 [2024-12-16 02:55:23.042667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.041 [2024-12-16 02:55:23.042675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:33:55.041 [2024-12-16 02:55:23.042687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.041 [2024-12-16 02:55:23.042694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:55.041 [2024-12-16 02:55:23.042706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.041 [2024-12-16 02:55:23.042714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:55.041 [2024-12-16 02:55:23.042726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.041 [2024-12-16 02:55:23.042733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:55.041 [2024-12-16 02:55:23.042746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:55.041 [2024-12-16 02:55:23.042753] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:33:55.041 [2024-12-16 02:55:23.042765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:55.041 [2024-12-16 02:55:23.042773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:33:55.041 [... repeated alternating nvme_io_qpair_print_command (READ/WRITE sqid:1, various cid, lba 1968-3720) and spdk_nvme_print_completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 notice pairs elided, sqhd:005d through sqhd:001a, 02:55:23.042786-02:55:23.046138 ...]
00:33:55.043 [2024-12-16 02:55:23.046151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:55.043 [2024-12-16 02:55:23.046157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:33:55.043 10574.67 IOPS, 41.31 MiB/s [2024-12-16T01:55:25.702Z]
00:33:55.043 10605.75 IOPS, 41.43 MiB/s [2024-12-16T01:55:25.702Z]
00:33:55.043 Received shutdown signal, test time was about 28.722326 seconds
00:33:55.043
00:33:55.043 Latency(us)
00:33:55.043 [2024-12-16T01:55:25.702Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:55.043 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:33:55.043 Verification LBA range: start 0x0 length 0x4000
00:33:55.043 Nvme0n1 : 28.72 10626.15 41.51 0.00 0.00 12025.13 1201.49 3019898.88
00:33:55.043 [2024-12-16T01:55:25.702Z]
===================================================================================================================
00:33:55.043 [2024-12-16T01:55:25.702Z] Total : 10626.15 41.51 0.00 0.00 12025.13 1201.49 3019898.88
00:33:55.043 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:33:55.302 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:33:55.302 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:33:55.302 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:33:55.302 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:33:55.302 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync
00:33:55.302 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:33:55.302 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e
00:33:55.302 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20}
00:33:55.302 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:33:55.302 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:33:55.302 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e
00:33:55.302 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0
00:33:55.302 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1161168 ']'
00:33:55.302 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1161168
00:33:55.302 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1161168 ']'
00:33:55.302 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1161168
00:33:55.302 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:33:55.302 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:55.302 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1161168
00:33:55.302 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:33:55.302 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:33:55.302 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1161168'
00:33:55.302 killing process with pid 1161168
00:33:55.302 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1161168
00:33:55.302 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1161168
00:33:55.562 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:33:55.562 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:33:55.562 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:33:55.562 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr
00:33:55.562 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save
00:33:55.562 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:33:55.562 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore
00:33:55.562 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:33:55.562 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns
00:33:55.562 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:55.562 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:33:55.562 02:55:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:33:57.466 02:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:33:57.466
00:33:57.466 real 0m40.653s
00:33:57.466 user 1m50.189s
00:33:57.466 sys 0m11.484s
00:33:57.466 02:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:33:57.466 02:55:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:33:57.466 ************************************
00:33:57.466 END TEST nvmf_host_multipath_status
00:33:57.466 ************************************
00:33:57.466 02:55:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:33:57.466 02:55:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:33:57.466 02:55:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:33:57.466 02:55:28
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:57.726 ************************************ 00:33:57.726 START TEST nvmf_discovery_remove_ifc 00:33:57.726 ************************************ 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:57.726 * Looking for test storage... 00:33:57.726 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:33:57.726 02:55:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:57.726 
02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:57.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.726 --rc genhtml_branch_coverage=1 00:33:57.726 --rc genhtml_function_coverage=1 00:33:57.726 --rc genhtml_legend=1 00:33:57.726 --rc geninfo_all_blocks=1 00:33:57.726 --rc geninfo_unexecuted_blocks=1 00:33:57.726 00:33:57.726 ' 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:57.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.726 --rc genhtml_branch_coverage=1 00:33:57.726 --rc genhtml_function_coverage=1 00:33:57.726 --rc genhtml_legend=1 00:33:57.726 --rc geninfo_all_blocks=1 00:33:57.726 --rc geninfo_unexecuted_blocks=1 00:33:57.726 00:33:57.726 ' 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:57.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.726 --rc genhtml_branch_coverage=1 00:33:57.726 --rc genhtml_function_coverage=1 00:33:57.726 --rc genhtml_legend=1 00:33:57.726 --rc geninfo_all_blocks=1 00:33:57.726 --rc geninfo_unexecuted_blocks=1 00:33:57.726 00:33:57.726 ' 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:57.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.726 --rc genhtml_branch_coverage=1 00:33:57.726 --rc genhtml_function_coverage=1 00:33:57.726 --rc genhtml_legend=1 
00:33:57.726 --rc geninfo_all_blocks=1 00:33:57.726 --rc geninfo_unexecuted_blocks=1 00:33:57.726 00:33:57.726 ' 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.726 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:33:57.727 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:57.727 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:57.727 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:57.727 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:57.727 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:57.727 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:57.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:57.727 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:57.727 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:57.727 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:57.727 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:57.727 
02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:57.727 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:57.727 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:57.727 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:57.727 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:57.727 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:57.727 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:57.727 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:57.727 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:57.727 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:57.727 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:57.727 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:57.727 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:57.727 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:57.727 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:57.727 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:33:57.727 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:33:57.727 02:55:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:04.296 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:04.296 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:04.296 Found net devices under 0000:af:00.0: cvl_0_0 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:04.296 02:55:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:04.296 Found net devices under 0000:af:00.1: cvl_0_1 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:04.296 02:55:33 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:04.296 02:55:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:04.296 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:04.296 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:04.296 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:04.296 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:04.297 02:55:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:04.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:04.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:34:04.297 00:34:04.297 --- 10.0.0.2 ping statistics --- 00:34:04.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:04.297 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:04.297 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:04.297 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:34:04.297 00:34:04.297 --- 10.0.0.1 ping statistics --- 00:34:04.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:04.297 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1169980 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 1169980 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1169980 ']' 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:04.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:04.297 [2024-12-16 02:55:34.338192] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:34:04.297 [2024-12-16 02:55:34.338234] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:04.297 [2024-12-16 02:55:34.415122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:04.297 [2024-12-16 02:55:34.435766] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:04.297 [2024-12-16 02:55:34.435800] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:04.297 [2024-12-16 02:55:34.435807] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:04.297 [2024-12-16 02:55:34.435813] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:04.297 [2024-12-16 02:55:34.435818] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:04.297 [2024-12-16 02:55:34.436294] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:04.297 [2024-12-16 02:55:34.573703] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:04.297 [2024-12-16 02:55:34.581866] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:04.297 null0 00:34:04.297 [2024-12-16 02:55:34.613869] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1170001 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1170001 /tmp/host.sock 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1170001 ']' 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:04.297 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:04.297 [2024-12-16 02:55:34.681945] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:34:04.297 [2024-12-16 02:55:34.681988] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1170001 ] 00:34:04.297 [2024-12-16 02:55:34.756310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:04.297 [2024-12-16 02:55:34.778624] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.297 02:55:34 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.297 02:55:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:05.676 [2024-12-16 02:55:35.970925] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:05.676 [2024-12-16 02:55:35.970948] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:05.676 [2024-12-16 02:55:35.970960] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:05.676 [2024-12-16 02:55:36.057215] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:05.676 [2024-12-16 02:55:36.111780] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:34:05.676 [2024-12-16 02:55:36.112680] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xa69710:1 started. 
00:34:05.676 [2024-12-16 02:55:36.113997] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:05.676 [2024-12-16 02:55:36.114037] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:05.676 [2024-12-16 02:55:36.114056] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:05.676 [2024-12-16 02:55:36.114068] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:05.676 [2024-12-16 02:55:36.114085] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:05.676 02:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.676 02:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:34:05.676 02:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:05.676 [2024-12-16 02:55:36.118950] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xa69710 was disconnected and freed. delete nvme_qpair. 
00:34:05.676 02:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:05.676 02:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:05.676 02:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.676 02:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:05.676 02:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:05.676 02:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:05.676 02:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.676 02:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:34:05.676 02:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:34:05.676 02:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:34:05.676 02:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:34:05.676 02:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:05.676 02:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:05.676 02:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:05.676 02:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.676 02:55:36 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:05.676 02:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:05.676 02:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:05.676 02:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.676 02:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:05.676 02:55:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:07.054 02:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:07.054 02:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:07.054 02:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:07.054 02:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.054 02:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:07.054 02:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:07.054 02:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:07.054 02:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.054 02:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:07.054 02:55:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:07.991 02:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:34:07.991 02:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:07.991 02:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:07.991 02:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:07.991 02:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:07.991 02:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:07.991 02:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:07.991 02:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:07.991 02:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:07.991 02:55:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:08.927 02:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:08.927 02:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:08.927 02:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:08.928 02:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.928 02:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:08.928 02:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:08.928 02:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:08.928 02:55:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:08.928 02:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:08.928 02:55:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:09.864 02:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:09.864 02:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:09.864 02:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:09.864 02:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.864 02:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:09.864 02:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:09.864 02:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:09.864 02:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.864 02:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:09.864 02:55:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:11.240 02:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:11.241 02:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:11.241 02:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:11.241 02:55:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.241 02:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:11.241 02:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:11.241 02:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:11.241 02:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.241 [2024-12-16 02:55:41.555408] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:34:11.241 [2024-12-16 02:55:41.555450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.241 [2024-12-16 02:55:41.555460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.241 [2024-12-16 02:55:41.555470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.241 [2024-12-16 02:55:41.555477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.241 [2024-12-16 02:55:41.555484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.241 [2024-12-16 02:55:41.555491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.241 [2024-12-16 02:55:41.555498] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.241 
[2024-12-16 02:55:41.555505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.241 [2024-12-16 02:55:41.555512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:11.241 [2024-12-16 02:55:41.555518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:11.241 [2024-12-16 02:55:41.555524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa45ec0 is same with the state(6) to be set 00:34:11.241 02:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:11.241 02:55:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:11.241 [2024-12-16 02:55:41.565430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa45ec0 (9): Bad file descriptor 00:34:11.241 [2024-12-16 02:55:41.575467] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:11.241 [2024-12-16 02:55:41.575478] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:11.241 [2024-12-16 02:55:41.575489] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:11.241 [2024-12-16 02:55:41.575493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:11.241 [2024-12-16 02:55:41.575514] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:34:12.178 02:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:12.178 02:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:12.178 02:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:12.178 02:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.178 02:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:12.178 02:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:12.178 02:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:12.178 [2024-12-16 02:55:42.591906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:34:12.178 [2024-12-16 02:55:42.591988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa45ec0 with addr=10.0.0.2, port=4420 00:34:12.178 [2024-12-16 02:55:42.592022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa45ec0 is same with the state(6) to be set 00:34:12.178 [2024-12-16 02:55:42.592077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa45ec0 (9): Bad file descriptor 00:34:12.178 [2024-12-16 02:55:42.593038] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:34:12.178 [2024-12-16 02:55:42.593102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:12.178 [2024-12-16 02:55:42.593125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:12.178 [2024-12-16 02:55:42.593148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:12.178 [2024-12-16 02:55:42.593168] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:12.178 [2024-12-16 02:55:42.593184] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:12.178 [2024-12-16 02:55:42.593197] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:12.178 [2024-12-16 02:55:42.593219] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:12.178 [2024-12-16 02:55:42.593234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:12.178 02:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.178 02:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:12.178 02:55:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:13.115 [2024-12-16 02:55:43.595744] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:13.115 [2024-12-16 02:55:43.595764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:34:13.115 [2024-12-16 02:55:43.595775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:13.115 [2024-12-16 02:55:43.595782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:13.115 [2024-12-16 02:55:43.595789] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:34:13.115 [2024-12-16 02:55:43.595800] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:13.115 [2024-12-16 02:55:43.595804] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:13.115 [2024-12-16 02:55:43.595808] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:13.115 [2024-12-16 02:55:43.595829] bdev_nvme.c:7267:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:34:13.115 [2024-12-16 02:55:43.595852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:13.115 [2024-12-16 02:55:43.595862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.115 [2024-12-16 02:55:43.595871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:13.115 [2024-12-16 02:55:43.595878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.115 [2024-12-16 02:55:43.595885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:34:13.115 [2024-12-16 02:55:43.595892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.115 [2024-12-16 02:55:43.595898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:13.115 [2024-12-16 02:55:43.595905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.115 [2024-12-16 02:55:43.595912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:13.115 [2024-12-16 02:55:43.595919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:13.115 [2024-12-16 02:55:43.595925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:34:13.115 [2024-12-16 02:55:43.596258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa355e0 (9): Bad file descriptor 00:34:13.115 [2024-12-16 02:55:43.597268] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:34:13.115 [2024-12-16 02:55:43.597278] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:34:13.115 02:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:13.115 02:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:13.115 02:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:13.115 02:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:13.116 02:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:13.116 02:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:13.116 02:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:13.116 02:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.116 02:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:34:13.116 02:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:13.116 02:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:13.116 02:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:34:13.116 02:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:13.116 02:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:13.116 02:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:13.116 02:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:13.116 02:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.116 02:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:13.116 02:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:13.116 02:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:34:13.374 02:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:13.374 02:55:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:14.311 02:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:14.311 02:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:14.311 02:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:14.311 02:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.311 02:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:14.311 02:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:14.311 02:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:14.311 02:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.311 02:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:14.311 02:55:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:15.248 [2024-12-16 02:55:45.654993] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:15.248 [2024-12-16 02:55:45.655011] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:15.248 [2024-12-16 02:55:45.655023] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:15.248 [2024-12-16 02:55:45.741276] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:34:15.248 [2024-12-16 02:55:45.836974] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:34:15.248 [2024-12-16 02:55:45.837555] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xa46260:1 started. 00:34:15.248 [2024-12-16 02:55:45.838532] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:15.248 [2024-12-16 02:55:45.838561] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:15.248 [2024-12-16 02:55:45.838578] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:15.248 [2024-12-16 02:55:45.838590] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:34:15.248 [2024-12-16 02:55:45.838597] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:15.248 [2024-12-16 02:55:45.843748] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xa46260 was disconnected and freed. delete nvme_qpair. 
00:34:15.248 02:55:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:15.248 02:55:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:15.248 02:55:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:15.248 02:55:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.248 02:55:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:15.248 02:55:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:15.248 02:55:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:15.248 02:55:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.248 02:55:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:34:15.248 02:55:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:34:15.248 02:55:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1170001 00:34:15.248 02:55:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1170001 ']' 00:34:15.248 02:55:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1170001 00:34:15.248 02:55:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:34:15.507 02:55:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:15.507 02:55:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1170001 
00:34:15.507 02:55:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:15.507 02:55:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:15.507 02:55:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1170001' 00:34:15.507 killing process with pid 1170001 00:34:15.507 02:55:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1170001 00:34:15.507 02:55:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1170001 00:34:15.507 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:34:15.507 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:15.507 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:34:15.507 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:15.507 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:34:15.507 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:15.507 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:15.507 rmmod nvme_tcp 00:34:15.507 rmmod nvme_fabrics 00:34:15.507 rmmod nvme_keyring 00:34:15.507 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:15.507 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:34:15.507 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:34:15.507 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1169980 ']' 00:34:15.507 
02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1169980 00:34:15.766 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1169980 ']' 00:34:15.766 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1169980 00:34:15.766 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:34:15.766 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:15.766 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1169980 00:34:15.766 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:15.766 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:15.766 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1169980' 00:34:15.766 killing process with pid 1169980 00:34:15.766 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1169980 00:34:15.766 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1169980 00:34:15.766 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:15.766 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:15.766 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:15.766 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:34:15.766 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:34:15.766 02:55:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:34:15.766 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:15.766 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:15.766 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:15.766 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:15.766 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:15.766 02:55:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:18.301 00:34:18.301 real 0m20.301s 00:34:18.301 user 0m24.351s 00:34:18.301 sys 0m5.845s 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:18.301 ************************************ 00:34:18.301 END TEST nvmf_discovery_remove_ifc 00:34:18.301 ************************************ 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:18.301 ************************************ 
00:34:18.301 START TEST nvmf_identify_kernel_target 00:34:18.301 ************************************ 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:18.301 * Looking for test storage... 00:34:18.301 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:18.301 02:55:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:18.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.301 --rc genhtml_branch_coverage=1 00:34:18.301 --rc genhtml_function_coverage=1 00:34:18.301 --rc genhtml_legend=1 00:34:18.301 --rc geninfo_all_blocks=1 00:34:18.301 --rc geninfo_unexecuted_blocks=1 00:34:18.301 00:34:18.301 ' 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:18.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.301 --rc genhtml_branch_coverage=1 00:34:18.301 --rc genhtml_function_coverage=1 00:34:18.301 --rc genhtml_legend=1 00:34:18.301 --rc geninfo_all_blocks=1 00:34:18.301 --rc geninfo_unexecuted_blocks=1 00:34:18.301 00:34:18.301 ' 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:18.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.301 --rc genhtml_branch_coverage=1 00:34:18.301 --rc genhtml_function_coverage=1 00:34:18.301 --rc genhtml_legend=1 00:34:18.301 --rc geninfo_all_blocks=1 00:34:18.301 --rc geninfo_unexecuted_blocks=1 00:34:18.301 00:34:18.301 ' 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:18.301 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:18.301 --rc genhtml_branch_coverage=1 00:34:18.301 --rc genhtml_function_coverage=1 00:34:18.301 --rc genhtml_legend=1 00:34:18.301 --rc geninfo_all_blocks=1 
00:34:18.301 --rc geninfo_unexecuted_blocks=1 00:34:18.301 00:34:18.301 ' 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:18.301 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:18.302 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:18.302 02:55:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:24.870 02:55:54 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:24.870 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:24.870 02:55:54 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:24.870 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:24.870 02:55:54 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:24.870 Found net devices under 0000:af:00.0: cvl_0_0 00:34:24.870 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:24.871 Found net devices under 0000:af:00.1: cvl_0_1 
00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:24.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:24.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:34:24.871 00:34:24.871 --- 10.0.0.2 ping statistics --- 00:34:24.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:24.871 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:24.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:24.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:34:24.871 00:34:24.871 --- 10.0.0.1 ping statistics --- 00:34:24.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:24.871 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:34:24.871 
02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:24.871 02:55:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:26.775 Waiting for block devices as requested 00:34:26.775 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:27.033 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:27.033 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:27.033 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:27.293 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:27.293 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:27.293 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:27.552 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:27.552 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:27.552 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:27.552 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:27.811 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:27.811 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:27.811 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:28.071 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:34:28.071 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:28.071 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:28.330 02:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:28.330 02:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:28.330 02:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:28.330 02:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:28.330 02:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:28.330 02:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:28.330 02:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:28.330 02:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:28.330 02:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:28.330 No valid GPT data, bailing 00:34:28.330 02:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:28.330 02:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:34:28.330 02:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:34:28.330 02:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:28.330 02:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:28.330 02:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:28.330 02:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:28.330 02:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:28.330 02:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:28.330 02:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:34:28.330 02:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:28.330 02:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:34:28.330 02:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:28.330 02:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:34:28.330 02:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:34:28.330 02:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:34:28.330 02:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:28.330 02:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:28.330 00:34:28.330 Discovery Log Number of Records 2, Generation counter 2 00:34:28.330 =====Discovery Log Entry 0====== 00:34:28.330 trtype: tcp 00:34:28.330 adrfam: ipv4 00:34:28.330 subtype: current discovery subsystem 
00:34:28.330 treq: not specified, sq flow control disable supported 00:34:28.330 portid: 1 00:34:28.330 trsvcid: 4420 00:34:28.330 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:28.330 traddr: 10.0.0.1 00:34:28.330 eflags: none 00:34:28.330 sectype: none 00:34:28.330 =====Discovery Log Entry 1====== 00:34:28.330 trtype: tcp 00:34:28.330 adrfam: ipv4 00:34:28.330 subtype: nvme subsystem 00:34:28.330 treq: not specified, sq flow control disable supported 00:34:28.330 portid: 1 00:34:28.330 trsvcid: 4420 00:34:28.330 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:28.330 traddr: 10.0.0.1 00:34:28.330 eflags: none 00:34:28.330 sectype: none 00:34:28.330 02:55:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:34:28.330 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:34:28.590 ===================================================== 00:34:28.590 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:34:28.590 ===================================================== 00:34:28.590 Controller Capabilities/Features 00:34:28.590 ================================ 00:34:28.590 Vendor ID: 0000 00:34:28.590 Subsystem Vendor ID: 0000 00:34:28.590 Serial Number: 38323a49c81a871883e1 00:34:28.590 Model Number: Linux 00:34:28.590 Firmware Version: 6.8.9-20 00:34:28.590 Recommended Arb Burst: 0 00:34:28.590 IEEE OUI Identifier: 00 00 00 00:34:28.591 Multi-path I/O 00:34:28.591 May have multiple subsystem ports: No 00:34:28.591 May have multiple controllers: No 00:34:28.591 Associated with SR-IOV VF: No 00:34:28.591 Max Data Transfer Size: Unlimited 00:34:28.591 Max Number of Namespaces: 0 00:34:28.591 Max Number of I/O Queues: 1024 00:34:28.591 NVMe Specification Version (VS): 1.3 00:34:28.591 NVMe Specification Version (Identify): 1.3 00:34:28.591 Maximum Queue Entries: 1024 
00:34:28.591 Contiguous Queues Required: No 00:34:28.591 Arbitration Mechanisms Supported 00:34:28.591 Weighted Round Robin: Not Supported 00:34:28.591 Vendor Specific: Not Supported 00:34:28.591 Reset Timeout: 7500 ms 00:34:28.591 Doorbell Stride: 4 bytes 00:34:28.591 NVM Subsystem Reset: Not Supported 00:34:28.591 Command Sets Supported 00:34:28.591 NVM Command Set: Supported 00:34:28.591 Boot Partition: Not Supported 00:34:28.591 Memory Page Size Minimum: 4096 bytes 00:34:28.591 Memory Page Size Maximum: 4096 bytes 00:34:28.591 Persistent Memory Region: Not Supported 00:34:28.591 Optional Asynchronous Events Supported 00:34:28.591 Namespace Attribute Notices: Not Supported 00:34:28.591 Firmware Activation Notices: Not Supported 00:34:28.591 ANA Change Notices: Not Supported 00:34:28.591 PLE Aggregate Log Change Notices: Not Supported 00:34:28.591 LBA Status Info Alert Notices: Not Supported 00:34:28.591 EGE Aggregate Log Change Notices: Not Supported 00:34:28.591 Normal NVM Subsystem Shutdown event: Not Supported 00:34:28.591 Zone Descriptor Change Notices: Not Supported 00:34:28.591 Discovery Log Change Notices: Supported 00:34:28.591 Controller Attributes 00:34:28.591 128-bit Host Identifier: Not Supported 00:34:28.591 Non-Operational Permissive Mode: Not Supported 00:34:28.591 NVM Sets: Not Supported 00:34:28.591 Read Recovery Levels: Not Supported 00:34:28.591 Endurance Groups: Not Supported 00:34:28.591 Predictable Latency Mode: Not Supported 00:34:28.591 Traffic Based Keep ALive: Not Supported 00:34:28.591 Namespace Granularity: Not Supported 00:34:28.591 SQ Associations: Not Supported 00:34:28.591 UUID List: Not Supported 00:34:28.591 Multi-Domain Subsystem: Not Supported 00:34:28.591 Fixed Capacity Management: Not Supported 00:34:28.591 Variable Capacity Management: Not Supported 00:34:28.591 Delete Endurance Group: Not Supported 00:34:28.591 Delete NVM Set: Not Supported 00:34:28.591 Extended LBA Formats Supported: Not Supported 00:34:28.591 Flexible 
Data Placement Supported: Not Supported 00:34:28.591 00:34:28.591 Controller Memory Buffer Support 00:34:28.591 ================================ 00:34:28.591 Supported: No 00:34:28.591 00:34:28.591 Persistent Memory Region Support 00:34:28.591 ================================ 00:34:28.591 Supported: No 00:34:28.591 00:34:28.591 Admin Command Set Attributes 00:34:28.591 ============================ 00:34:28.591 Security Send/Receive: Not Supported 00:34:28.591 Format NVM: Not Supported 00:34:28.591 Firmware Activate/Download: Not Supported 00:34:28.591 Namespace Management: Not Supported 00:34:28.591 Device Self-Test: Not Supported 00:34:28.591 Directives: Not Supported 00:34:28.591 NVMe-MI: Not Supported 00:34:28.591 Virtualization Management: Not Supported 00:34:28.591 Doorbell Buffer Config: Not Supported 00:34:28.591 Get LBA Status Capability: Not Supported 00:34:28.591 Command & Feature Lockdown Capability: Not Supported 00:34:28.591 Abort Command Limit: 1 00:34:28.591 Async Event Request Limit: 1 00:34:28.591 Number of Firmware Slots: N/A 00:34:28.591 Firmware Slot 1 Read-Only: N/A 00:34:28.591 Firmware Activation Without Reset: N/A 00:34:28.591 Multiple Update Detection Support: N/A 00:34:28.591 Firmware Update Granularity: No Information Provided 00:34:28.591 Per-Namespace SMART Log: No 00:34:28.591 Asymmetric Namespace Access Log Page: Not Supported 00:34:28.591 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:34:28.591 Command Effects Log Page: Not Supported 00:34:28.591 Get Log Page Extended Data: Supported 00:34:28.591 Telemetry Log Pages: Not Supported 00:34:28.591 Persistent Event Log Pages: Not Supported 00:34:28.591 Supported Log Pages Log Page: May Support 00:34:28.591 Commands Supported & Effects Log Page: Not Supported 00:34:28.591 Feature Identifiers & Effects Log Page:May Support 00:34:28.591 NVMe-MI Commands & Effects Log Page: May Support 00:34:28.591 Data Area 4 for Telemetry Log: Not Supported 00:34:28.591 Error Log Page Entries 
Supported: 1 00:34:28.591 Keep Alive: Not Supported 00:34:28.591 00:34:28.591 NVM Command Set Attributes 00:34:28.591 ========================== 00:34:28.591 Submission Queue Entry Size 00:34:28.591 Max: 1 00:34:28.591 Min: 1 00:34:28.591 Completion Queue Entry Size 00:34:28.591 Max: 1 00:34:28.591 Min: 1 00:34:28.591 Number of Namespaces: 0 00:34:28.591 Compare Command: Not Supported 00:34:28.591 Write Uncorrectable Command: Not Supported 00:34:28.591 Dataset Management Command: Not Supported 00:34:28.591 Write Zeroes Command: Not Supported 00:34:28.591 Set Features Save Field: Not Supported 00:34:28.591 Reservations: Not Supported 00:34:28.591 Timestamp: Not Supported 00:34:28.591 Copy: Not Supported 00:34:28.591 Volatile Write Cache: Not Present 00:34:28.591 Atomic Write Unit (Normal): 1 00:34:28.591 Atomic Write Unit (PFail): 1 00:34:28.591 Atomic Compare & Write Unit: 1 00:34:28.591 Fused Compare & Write: Not Supported 00:34:28.591 Scatter-Gather List 00:34:28.591 SGL Command Set: Supported 00:34:28.591 SGL Keyed: Not Supported 00:34:28.591 SGL Bit Bucket Descriptor: Not Supported 00:34:28.591 SGL Metadata Pointer: Not Supported 00:34:28.591 Oversized SGL: Not Supported 00:34:28.591 SGL Metadata Address: Not Supported 00:34:28.591 SGL Offset: Supported 00:34:28.591 Transport SGL Data Block: Not Supported 00:34:28.591 Replay Protected Memory Block: Not Supported 00:34:28.591 00:34:28.591 Firmware Slot Information 00:34:28.591 ========================= 00:34:28.591 Active slot: 0 00:34:28.591 00:34:28.591 00:34:28.591 Error Log 00:34:28.591 ========= 00:34:28.591 00:34:28.591 Active Namespaces 00:34:28.591 ================= 00:34:28.591 Discovery Log Page 00:34:28.591 ================== 00:34:28.591 Generation Counter: 2 00:34:28.591 Number of Records: 2 00:34:28.591 Record Format: 0 00:34:28.591 00:34:28.591 Discovery Log Entry 0 00:34:28.591 ---------------------- 00:34:28.591 Transport Type: 3 (TCP) 00:34:28.591 Address Family: 1 (IPv4) 00:34:28.591 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:34:28.591 Entry Flags: 00:34:28.591 Duplicate Returned Information: 0 00:34:28.591 Explicit Persistent Connection Support for Discovery: 0 00:34:28.591 Transport Requirements: 00:34:28.591 Secure Channel: Not Specified 00:34:28.591 Port ID: 1 (0x0001) 00:34:28.591 Controller ID: 65535 (0xffff) 00:34:28.591 Admin Max SQ Size: 32 00:34:28.591 Transport Service Identifier: 4420 00:34:28.591 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:34:28.591 Transport Address: 10.0.0.1 00:34:28.591 Discovery Log Entry 1 00:34:28.591 ---------------------- 00:34:28.591 Transport Type: 3 (TCP) 00:34:28.591 Address Family: 1 (IPv4) 00:34:28.591 Subsystem Type: 2 (NVM Subsystem) 00:34:28.591 Entry Flags: 00:34:28.591 Duplicate Returned Information: 0 00:34:28.591 Explicit Persistent Connection Support for Discovery: 0 00:34:28.591 Transport Requirements: 00:34:28.591 Secure Channel: Not Specified 00:34:28.591 Port ID: 1 (0x0001) 00:34:28.591 Controller ID: 65535 (0xffff) 00:34:28.591 Admin Max SQ Size: 32 00:34:28.591 Transport Service Identifier: 4420 00:34:28.591 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:34:28.591 Transport Address: 10.0.0.1 00:34:28.591 02:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:28.591 get_feature(0x01) failed 00:34:28.591 get_feature(0x02) failed 00:34:28.591 get_feature(0x04) failed 00:34:28.591 ===================================================== 00:34:28.591 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:28.591 ===================================================== 00:34:28.591 Controller Capabilities/Features 00:34:28.591 ================================ 00:34:28.591 Vendor ID: 0000 00:34:28.591 Subsystem Vendor ID: 
0000 00:34:28.591 Serial Number: e42b53d1f8dc7916453f 00:34:28.591 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:34:28.591 Firmware Version: 6.8.9-20 00:34:28.591 Recommended Arb Burst: 6 00:34:28.591 IEEE OUI Identifier: 00 00 00 00:34:28.591 Multi-path I/O 00:34:28.591 May have multiple subsystem ports: Yes 00:34:28.591 May have multiple controllers: Yes 00:34:28.591 Associated with SR-IOV VF: No 00:34:28.591 Max Data Transfer Size: Unlimited 00:34:28.592 Max Number of Namespaces: 1024 00:34:28.592 Max Number of I/O Queues: 128 00:34:28.592 NVMe Specification Version (VS): 1.3 00:34:28.592 NVMe Specification Version (Identify): 1.3 00:34:28.592 Maximum Queue Entries: 1024 00:34:28.592 Contiguous Queues Required: No 00:34:28.592 Arbitration Mechanisms Supported 00:34:28.592 Weighted Round Robin: Not Supported 00:34:28.592 Vendor Specific: Not Supported 00:34:28.592 Reset Timeout: 7500 ms 00:34:28.592 Doorbell Stride: 4 bytes 00:34:28.592 NVM Subsystem Reset: Not Supported 00:34:28.592 Command Sets Supported 00:34:28.592 NVM Command Set: Supported 00:34:28.592 Boot Partition: Not Supported 00:34:28.592 Memory Page Size Minimum: 4096 bytes 00:34:28.592 Memory Page Size Maximum: 4096 bytes 00:34:28.592 Persistent Memory Region: Not Supported 00:34:28.592 Optional Asynchronous Events Supported 00:34:28.592 Namespace Attribute Notices: Supported 00:34:28.592 Firmware Activation Notices: Not Supported 00:34:28.592 ANA Change Notices: Supported 00:34:28.592 PLE Aggregate Log Change Notices: Not Supported 00:34:28.592 LBA Status Info Alert Notices: Not Supported 00:34:28.592 EGE Aggregate Log Change Notices: Not Supported 00:34:28.592 Normal NVM Subsystem Shutdown event: Not Supported 00:34:28.592 Zone Descriptor Change Notices: Not Supported 00:34:28.592 Discovery Log Change Notices: Not Supported 00:34:28.592 Controller Attributes 00:34:28.592 128-bit Host Identifier: Supported 00:34:28.592 Non-Operational Permissive Mode: Not Supported 00:34:28.592 NVM Sets: Not 
Supported 00:34:28.592 Read Recovery Levels: Not Supported 00:34:28.592 Endurance Groups: Not Supported 00:34:28.592 Predictable Latency Mode: Not Supported 00:34:28.592 Traffic Based Keep ALive: Supported 00:34:28.592 Namespace Granularity: Not Supported 00:34:28.592 SQ Associations: Not Supported 00:34:28.592 UUID List: Not Supported 00:34:28.592 Multi-Domain Subsystem: Not Supported 00:34:28.592 Fixed Capacity Management: Not Supported 00:34:28.592 Variable Capacity Management: Not Supported 00:34:28.592 Delete Endurance Group: Not Supported 00:34:28.592 Delete NVM Set: Not Supported 00:34:28.592 Extended LBA Formats Supported: Not Supported 00:34:28.592 Flexible Data Placement Supported: Not Supported 00:34:28.592 00:34:28.592 Controller Memory Buffer Support 00:34:28.592 ================================ 00:34:28.592 Supported: No 00:34:28.592 00:34:28.592 Persistent Memory Region Support 00:34:28.592 ================================ 00:34:28.592 Supported: No 00:34:28.592 00:34:28.592 Admin Command Set Attributes 00:34:28.592 ============================ 00:34:28.592 Security Send/Receive: Not Supported 00:34:28.592 Format NVM: Not Supported 00:34:28.592 Firmware Activate/Download: Not Supported 00:34:28.592 Namespace Management: Not Supported 00:34:28.592 Device Self-Test: Not Supported 00:34:28.592 Directives: Not Supported 00:34:28.592 NVMe-MI: Not Supported 00:34:28.592 Virtualization Management: Not Supported 00:34:28.592 Doorbell Buffer Config: Not Supported 00:34:28.592 Get LBA Status Capability: Not Supported 00:34:28.592 Command & Feature Lockdown Capability: Not Supported 00:34:28.592 Abort Command Limit: 4 00:34:28.592 Async Event Request Limit: 4 00:34:28.592 Number of Firmware Slots: N/A 00:34:28.592 Firmware Slot 1 Read-Only: N/A 00:34:28.592 Firmware Activation Without Reset: N/A 00:34:28.592 Multiple Update Detection Support: N/A 00:34:28.592 Firmware Update Granularity: No Information Provided 00:34:28.592 Per-Namespace SMART Log: Yes 
00:34:28.592 Asymmetric Namespace Access Log Page: Supported 00:34:28.592 ANA Transition Time : 10 sec 00:34:28.592 00:34:28.592 Asymmetric Namespace Access Capabilities 00:34:28.592 ANA Optimized State : Supported 00:34:28.592 ANA Non-Optimized State : Supported 00:34:28.592 ANA Inaccessible State : Supported 00:34:28.592 ANA Persistent Loss State : Supported 00:34:28.592 ANA Change State : Supported 00:34:28.592 ANAGRPID is not changed : No 00:34:28.592 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:34:28.592 00:34:28.592 ANA Group Identifier Maximum : 128 00:34:28.592 Number of ANA Group Identifiers : 128 00:34:28.592 Max Number of Allowed Namespaces : 1024 00:34:28.592 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:34:28.592 Command Effects Log Page: Supported 00:34:28.592 Get Log Page Extended Data: Supported 00:34:28.592 Telemetry Log Pages: Not Supported 00:34:28.592 Persistent Event Log Pages: Not Supported 00:34:28.592 Supported Log Pages Log Page: May Support 00:34:28.592 Commands Supported & Effects Log Page: Not Supported 00:34:28.592 Feature Identifiers & Effects Log Page:May Support 00:34:28.592 NVMe-MI Commands & Effects Log Page: May Support 00:34:28.592 Data Area 4 for Telemetry Log: Not Supported 00:34:28.592 Error Log Page Entries Supported: 128 00:34:28.592 Keep Alive: Supported 00:34:28.592 Keep Alive Granularity: 1000 ms 00:34:28.592 00:34:28.592 NVM Command Set Attributes 00:34:28.592 ========================== 00:34:28.592 Submission Queue Entry Size 00:34:28.592 Max: 64 00:34:28.592 Min: 64 00:34:28.592 Completion Queue Entry Size 00:34:28.592 Max: 16 00:34:28.592 Min: 16 00:34:28.592 Number of Namespaces: 1024 00:34:28.592 Compare Command: Not Supported 00:34:28.592 Write Uncorrectable Command: Not Supported 00:34:28.592 Dataset Management Command: Supported 00:34:28.592 Write Zeroes Command: Supported 00:34:28.592 Set Features Save Field: Not Supported 00:34:28.592 Reservations: Not Supported 00:34:28.592 Timestamp: Not Supported 
00:34:28.592 Copy: Not Supported 00:34:28.592 Volatile Write Cache: Present 00:34:28.592 Atomic Write Unit (Normal): 1 00:34:28.592 Atomic Write Unit (PFail): 1 00:34:28.592 Atomic Compare & Write Unit: 1 00:34:28.592 Fused Compare & Write: Not Supported 00:34:28.592 Scatter-Gather List 00:34:28.592 SGL Command Set: Supported 00:34:28.592 SGL Keyed: Not Supported 00:34:28.592 SGL Bit Bucket Descriptor: Not Supported 00:34:28.592 SGL Metadata Pointer: Not Supported 00:34:28.592 Oversized SGL: Not Supported 00:34:28.592 SGL Metadata Address: Not Supported 00:34:28.592 SGL Offset: Supported 00:34:28.592 Transport SGL Data Block: Not Supported 00:34:28.592 Replay Protected Memory Block: Not Supported 00:34:28.592 00:34:28.592 Firmware Slot Information 00:34:28.592 ========================= 00:34:28.592 Active slot: 0 00:34:28.592 00:34:28.592 Asymmetric Namespace Access 00:34:28.592 =========================== 00:34:28.592 Change Count : 0 00:34:28.592 Number of ANA Group Descriptors : 1 00:34:28.592 ANA Group Descriptor : 0 00:34:28.592 ANA Group ID : 1 00:34:28.592 Number of NSID Values : 1 00:34:28.592 Change Count : 0 00:34:28.592 ANA State : 1 00:34:28.592 Namespace Identifier : 1 00:34:28.592 00:34:28.592 Commands Supported and Effects 00:34:28.592 ============================== 00:34:28.592 Admin Commands 00:34:28.592 -------------- 00:34:28.592 Get Log Page (02h): Supported 00:34:28.592 Identify (06h): Supported 00:34:28.592 Abort (08h): Supported 00:34:28.592 Set Features (09h): Supported 00:34:28.592 Get Features (0Ah): Supported 00:34:28.592 Asynchronous Event Request (0Ch): Supported 00:34:28.592 Keep Alive (18h): Supported 00:34:28.592 I/O Commands 00:34:28.592 ------------ 00:34:28.592 Flush (00h): Supported 00:34:28.592 Write (01h): Supported LBA-Change 00:34:28.592 Read (02h): Supported 00:34:28.592 Write Zeroes (08h): Supported LBA-Change 00:34:28.592 Dataset Management (09h): Supported 00:34:28.592 00:34:28.592 Error Log 00:34:28.592 ========= 
00:34:28.592 Entry: 0 00:34:28.592 Error Count: 0x3 00:34:28.592 Submission Queue Id: 0x0 00:34:28.592 Command Id: 0x5 00:34:28.592 Phase Bit: 0 00:34:28.592 Status Code: 0x2 00:34:28.592 Status Code Type: 0x0 00:34:28.592 Do Not Retry: 1 00:34:28.592 Error Location: 0x28 00:34:28.592 LBA: 0x0 00:34:28.592 Namespace: 0x0 00:34:28.592 Vendor Log Page: 0x0 00:34:28.592 ----------- 00:34:28.592 Entry: 1 00:34:28.592 Error Count: 0x2 00:34:28.592 Submission Queue Id: 0x0 00:34:28.592 Command Id: 0x5 00:34:28.592 Phase Bit: 0 00:34:28.592 Status Code: 0x2 00:34:28.592 Status Code Type: 0x0 00:34:28.592 Do Not Retry: 1 00:34:28.592 Error Location: 0x28 00:34:28.592 LBA: 0x0 00:34:28.592 Namespace: 0x0 00:34:28.592 Vendor Log Page: 0x0 00:34:28.592 ----------- 00:34:28.592 Entry: 2 00:34:28.592 Error Count: 0x1 00:34:28.592 Submission Queue Id: 0x0 00:34:28.592 Command Id: 0x4 00:34:28.592 Phase Bit: 0 00:34:28.592 Status Code: 0x2 00:34:28.592 Status Code Type: 0x0 00:34:28.592 Do Not Retry: 1 00:34:28.592 Error Location: 0x28 00:34:28.592 LBA: 0x0 00:34:28.592 Namespace: 0x0 00:34:28.592 Vendor Log Page: 0x0 00:34:28.592 00:34:28.592 Number of Queues 00:34:28.592 ================ 00:34:28.592 Number of I/O Submission Queues: 128 00:34:28.592 Number of I/O Completion Queues: 128 00:34:28.592 00:34:28.592 ZNS Specific Controller Data 00:34:28.592 ============================ 00:34:28.593 Zone Append Size Limit: 0 00:34:28.593 00:34:28.593 00:34:28.593 Active Namespaces 00:34:28.593 ================= 00:34:28.593 get_feature(0x05) failed 00:34:28.593 Namespace ID:1 00:34:28.593 Command Set Identifier: NVM (00h) 00:34:28.593 Deallocate: Supported 00:34:28.593 Deallocated/Unwritten Error: Not Supported 00:34:28.593 Deallocated Read Value: Unknown 00:34:28.593 Deallocate in Write Zeroes: Not Supported 00:34:28.593 Deallocated Guard Field: 0xFFFF 00:34:28.593 Flush: Supported 00:34:28.593 Reservation: Not Supported 00:34:28.593 Namespace Sharing Capabilities: Multiple 
Controllers 00:34:28.593 Size (in LBAs): 1953525168 (931GiB) 00:34:28.593 Capacity (in LBAs): 1953525168 (931GiB) 00:34:28.593 Utilization (in LBAs): 1953525168 (931GiB) 00:34:28.593 UUID: fa27ba8b-ba9a-467f-81f2-b0e394b89fd8 00:34:28.593 Thin Provisioning: Not Supported 00:34:28.593 Per-NS Atomic Units: Yes 00:34:28.593 Atomic Boundary Size (Normal): 0 00:34:28.593 Atomic Boundary Size (PFail): 0 00:34:28.593 Atomic Boundary Offset: 0 00:34:28.593 NGUID/EUI64 Never Reused: No 00:34:28.593 ANA group ID: 1 00:34:28.593 Namespace Write Protected: No 00:34:28.593 Number of LBA Formats: 1 00:34:28.593 Current LBA Format: LBA Format #00 00:34:28.593 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:28.593 00:34:28.593 02:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:34:28.593 02:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:28.593 02:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:34:28.593 02:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:28.593 02:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:34:28.593 02:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:28.593 02:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:28.593 rmmod nvme_tcp 00:34:28.593 rmmod nvme_fabrics 00:34:28.593 02:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:28.593 02:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:34:28.593 02:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:34:28.593 02:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:34:28.593 02:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:28.593 02:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:28.593 02:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:28.593 02:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:34:28.593 02:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:34:28.593 02:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:28.593 02:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:28.593 02:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:28.593 02:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:28.593 02:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:28.593 02:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:28.593 02:55:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:31.127 02:56:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:31.127 02:56:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:34:31.127 02:56:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:31.127 02:56:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:34:31.127 02:56:01 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:31.127 02:56:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:31.127 02:56:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:31.127 02:56:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:31.127 02:56:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:31.127 02:56:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:31.127 02:56:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:33.661 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:33.661 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:33.661 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:33.661 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:33.661 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:33.661 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:33.661 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:33.661 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:33.661 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:33.661 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:33.661 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:33.661 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:33.661 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:33.661 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:33.661 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:33.661 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:34:34.601 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:34.601 00:34:34.601 real 0m16.636s 00:34:34.601 user 0m4.268s 00:34:34.601 sys 0m8.735s 00:34:34.601 02:56:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:34.601 02:56:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:34.601 ************************************ 00:34:34.601 END TEST nvmf_identify_kernel_target 00:34:34.601 ************************************ 00:34:34.601 02:56:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:34.601 02:56:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:34.601 02:56:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:34.601 02:56:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.601 ************************************ 00:34:34.601 START TEST nvmf_auth_host 00:34:34.601 ************************************ 00:34:34.601 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:34.862 * Looking for test storage... 
00:34:34.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:34.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.862 --rc genhtml_branch_coverage=1 00:34:34.862 --rc genhtml_function_coverage=1 00:34:34.862 --rc genhtml_legend=1 00:34:34.862 --rc geninfo_all_blocks=1 00:34:34.862 --rc geninfo_unexecuted_blocks=1 00:34:34.862 00:34:34.862 ' 00:34:34.862 02:56:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:34.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.862 --rc genhtml_branch_coverage=1 00:34:34.862 --rc genhtml_function_coverage=1 00:34:34.862 --rc genhtml_legend=1 00:34:34.862 --rc geninfo_all_blocks=1 00:34:34.862 --rc geninfo_unexecuted_blocks=1 00:34:34.862 00:34:34.862 ' 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:34.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.862 --rc genhtml_branch_coverage=1 00:34:34.862 --rc genhtml_function_coverage=1 00:34:34.862 --rc genhtml_legend=1 00:34:34.862 --rc geninfo_all_blocks=1 00:34:34.862 --rc geninfo_unexecuted_blocks=1 00:34:34.862 00:34:34.862 ' 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:34.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.862 --rc genhtml_branch_coverage=1 00:34:34.862 --rc genhtml_function_coverage=1 00:34:34.862 --rc genhtml_legend=1 00:34:34.862 --rc geninfo_all_blocks=1 00:34:34.862 --rc geninfo_unexecuted_blocks=1 00:34:34.862 00:34:34.862 ' 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:34.862 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.863 02:56:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:34.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:34.863 02:56:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:34:34.863 02:56:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:41.431 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:41.431 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:41.432 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:41.432 Found net devices under 0000:af:00.0: cvl_0_0 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:41.432 Found net devices under 0000:af:00.1: cvl_0_1 00:34:41.432 02:56:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:41.432 02:56:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:41.432 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:41.432 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:34:41.432 00:34:41.432 --- 10.0.0.2 ping statistics --- 00:34:41.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:41.432 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:41.432 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:41.432 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:34:41.432 00:34:41.432 --- 10.0.0.1 ping statistics --- 00:34:41.432 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:41.432 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1181660 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1181660 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1181660 ']' 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:41.432 02:56:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:41.432 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ed45e3e248b69b6ec9981bf2d6616964 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.xyV 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ed45e3e248b69b6ec9981bf2d6616964 0 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ed45e3e248b69b6ec9981bf2d6616964 0 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ed45e3e248b69b6ec9981bf2d6616964 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.xyV 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.xyV 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.xyV 
00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=19ba0cbf6cc31962f513128674a311c0467510aab307193b5187c532f8ef4d4a 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Hyn 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 19ba0cbf6cc31962f513128674a311c0467510aab307193b5187c532f8ef4d4a 3 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 19ba0cbf6cc31962f513128674a311c0467510aab307193b5187c532f8ef4d4a 3 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=19ba0cbf6cc31962f513128674a311c0467510aab307193b5187c532f8ef4d4a 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Hyn 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Hyn 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Hyn 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ebbd428548cb5a7e1a8c5c84eab665e6ebf709b4c07dd1a9 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.kZb 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ebbd428548cb5a7e1a8c5c84eab665e6ebf709b4c07dd1a9 0 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ebbd428548cb5a7e1a8c5c84eab665e6ebf709b4c07dd1a9 0 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ebbd428548cb5a7e1a8c5c84eab665e6ebf709b4c07dd1a9 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.kZb 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.kZb 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.kZb 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ec73753fbe50b8471e78cfb9e85ae54a3fb00035458b2852 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.mXa 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ec73753fbe50b8471e78cfb9e85ae54a3fb00035458b2852 2 00:34:41.433 02:56:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ec73753fbe50b8471e78cfb9e85ae54a3fb00035458b2852 2 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ec73753fbe50b8471e78cfb9e85ae54a3fb00035458b2852 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.mXa 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.mXa 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.mXa 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=50f8e87727b37ebf2e27e4997201d2b1 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.D9r 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 50f8e87727b37ebf2e27e4997201d2b1 1 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 50f8e87727b37ebf2e27e4997201d2b1 1 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=50f8e87727b37ebf2e27e4997201d2b1 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.D9r 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.D9r 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.D9r 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=762fd01bb8d9842addd81292e7746889 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.xKN 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 762fd01bb8d9842addd81292e7746889 1 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 762fd01bb8d9842addd81292e7746889 1 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=762fd01bb8d9842addd81292e7746889 00:34:41.433 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:41.434 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:41.434 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.xKN 00:34:41.434 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.xKN 00:34:41.434 02:56:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.xKN 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:41.434 02:56:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=076c752b528d39de18f7858110ead10fb850a71429073ec4 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.wxH 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 076c752b528d39de18f7858110ead10fb850a71429073ec4 2 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 076c752b528d39de18f7858110ead10fb850a71429073ec4 2 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=076c752b528d39de18f7858110ead10fb850a71429073ec4 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.wxH 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.wxH 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.wxH 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8221f4b9da6518fe1c276d2c6712c312 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.pZC 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8221f4b9da6518fe1c276d2c6712c312 0 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8221f4b9da6518fe1c276d2c6712c312 0 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8221f4b9da6518fe1c276d2c6712c312 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:41.434 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:41.743 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.pZC 00:34:41.743 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.pZC 00:34:41.743 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.pZC 00:34:41.743 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:41.743 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:41.743 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:41.743 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:41.743 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:41.743 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:41.743 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:41.743 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0e0f5281bfeaa95f65f67d25d42a2967af0d8ecdde0f08ab7e462e26a999794a 00:34:41.743 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:41.743 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.xGa 00:34:41.743 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0e0f5281bfeaa95f65f67d25d42a2967af0d8ecdde0f08ab7e462e26a999794a 3 00:34:41.743 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0e0f5281bfeaa95f65f67d25d42a2967af0d8ecdde0f08ab7e462e26a999794a 3 00:34:41.743 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:41.743 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:41.743 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0e0f5281bfeaa95f65f67d25d42a2967af0d8ecdde0f08ab7e462e26a999794a 00:34:41.743 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:41.743 02:56:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:41.743 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.xGa 00:34:41.743 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.xGa 00:34:41.743 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.xGa 00:34:41.743 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:41.743 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1181660 00:34:41.743 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1181660 ']' 00:34:41.743 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:41.743 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:41.743 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:41.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
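The `gen_dhchap_key`/`format_dhchap_key` trace above reads N random bytes (`xxd -p -c0 -l <N> /dev/urandom`), keeps the hex string as the secret, and pipes it through an inline `python -` step to emit a `DHHC-1:<digest>:<base64 blob>:` string. The sketch below is a hypothetical reimplementation of that formatting, assuming the NVMe DH-HMAC-CHAP secret representation (base64 of the secret bytes followed by their CRC-32, little-endian) — which is consistent with the keys in the log, where base64-decoding the blob appears to yield the ASCII hex string plus four trailing bytes. The function names `make_dhchap_key` and `check_dhchap_key` are illustrative, not SPDK's own.

```python
# Hypothetical sketch of the DHHC-1 key formatting performed by the
# gen_dhchap_key / format_dhchap_key trace above. Assumptions: the
# secret is the ASCII hex string itself, and the base64 blob is
# secret || CRC-32(secret) in little-endian byte order.
import base64
import secrets
import struct
import zlib


def make_dhchap_key(hex_len: int, digest_id: int) -> str:
    # xxd -p -c0 -l <hex_len/2> /dev/urandom  ->  hex string of hex_len chars
    secret = secrets.token_hex(hex_len // 2).encode("ascii")
    crc = struct.pack("<I", zlib.crc32(secret))  # CRC-32 trailer, little-endian
    blob = base64.b64encode(secret + crc).decode("ascii")
    return f"DHHC-1:{digest_id:02x}:{blob}:"


def check_dhchap_key(key: str) -> bool:
    # Round-trip check: split the colon-delimited fields and verify
    # that the last four decoded bytes are the CRC-32 of the secret.
    prefix, _digest, blob, _ = key.split(":")
    raw = base64.b64decode(blob)
    secret, crc = raw[:-4], raw[-4:]
    return prefix == "DHHC-1" and crc == struct.pack("<I", zlib.crc32(secret))


key = make_dhchap_key(64, 3)   # sha512-sized key, as in "gen_dhchap_key sha512 64"
print(check_dhchap_key(key))   # prints: True
```

The digest field (`00` … `03` in the log) indexes the hash the key is paired with (null, sha256, sha384, sha512), matching the `digests` associative array in the trace; the script then `chmod 0600`s the file and registers it with `keyring_file_add_key`.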
00:34:41.743 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:41.743 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.xyV 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Hyn ]] 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Hyn 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.kZb 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.mXa ]] 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mXa 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.D9r 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.xKN ]] 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.xKN 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.wxH 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.pZC ]] 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.pZC 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.xGa 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:42.084 02:56:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:42.084 02:56:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:44.617 Waiting for block devices as requested 00:34:44.617 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:44.874 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:44.874 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:44.874 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:44.874 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:45.132 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:45.132 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:45.132 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:45.132 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:45.390 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:45.390 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:45.390 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:45.649 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:45.649 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:45.649 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:45.649 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:45.908 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:46.475 02:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:46.475 02:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:46.475 02:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:46.475 02:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:46.475 02:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:34:46.475 02:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:46.475 02:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:46.475 02:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:46.475 02:56:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:46.475 No valid GPT data, bailing 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:46.475 00:34:46.475 Discovery Log Number of Records 2, Generation counter 2 00:34:46.475 =====Discovery Log Entry 0====== 00:34:46.475 trtype: tcp 00:34:46.475 adrfam: ipv4 00:34:46.475 subtype: current discovery subsystem 00:34:46.475 treq: not specified, sq flow control disable supported 00:34:46.475 portid: 1 00:34:46.475 trsvcid: 4420 00:34:46.475 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:46.475 traddr: 10.0.0.1 00:34:46.475 eflags: none 00:34:46.475 sectype: none 00:34:46.475 =====Discovery Log Entry 1====== 00:34:46.475 trtype: tcp 00:34:46.475 adrfam: ipv4 00:34:46.475 subtype: nvme subsystem 00:34:46.475 treq: not specified, sq flow control disable supported 00:34:46.475 portid: 1 00:34:46.475 trsvcid: 4420 00:34:46.475 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:46.475 traddr: 10.0.0.1 00:34:46.475 eflags: none 00:34:46.475 sectype: none 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: ]] 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:46.475 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.734 nvme0n1 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ0NWUzZTI0OGI2OWI2ZWM5OTgxYmYyZDY2MTY5NjSFveV2: 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ0NWUzZTI0OGI2OWI2ZWM5OTgxYmYyZDY2MTY5NjSFveV2: 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: ]] 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.734 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.993 nvme0n1 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.993 02:56:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: ]] 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:46.993 
02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:46.993 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:46.994 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.994 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.994 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:46.994 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:46.994 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:46.994 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:46.994 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:46.994 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:46.994 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.994 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.252 nvme0n1 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: ]] 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.252 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:34:47.511 nvme0n1 00:34:47.511 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.511 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.511 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.511 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.511 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.511 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.511 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.511 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.511 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.511 02:56:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MDc2Yzc1MmI1MjhkMzlkZTE4Zjc4NTgxMTBlYWQxMGZiODUwYTcxNDI5MDczZWM0rc/TjQ==: 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc2Yzc1MmI1MjhkMzlkZTE4Zjc4NTgxMTBlYWQxMGZiODUwYTcxNDI5MDczZWM0rc/TjQ==: 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: ]] 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:47.511 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.512 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.770 nvme0n1 00:34:47.770 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.770 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.770 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGUwZjUyODFiZmVhYTk1ZjY1ZjY3ZDI1ZDQyYTI5NjdhZjBkOGVjZGRlMGYwOGFiN2U0NjJlMjZhOTk5Nzk0YQNJYvs=: 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:47.771 02:56:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGUwZjUyODFiZmVhYTk1ZjY1ZjY3ZDI1ZDQyYTI5NjdhZjBkOGVjZGRlMGYwOGFiN2U0NjJlMjZhOTk5Nzk0YQNJYvs=: 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.771 nvme0n1 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.771 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.030 
02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ0NWUzZTI0OGI2OWI2ZWM5OTgxYmYyZDY2MTY5NjSFveV2: 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ0NWUzZTI0OGI2OWI2ZWM5OTgxYmYyZDY2MTY5NjSFveV2: 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: ]] 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: 00:34:48.030 
02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.030 02:56:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.030 nvme0n1 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.030 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.290 02:56:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: ]] 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:48.290 02:56:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.290 nvme0n1 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.290 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.549 02:56:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: ]] 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.549 02:56:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.549 nvme0n1 00:34:48.549 02:56:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.549 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.549 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.549 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.549 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.549 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.808 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.808 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.808 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.808 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.808 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.808 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.808 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:48.808 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.808 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:48.808 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:48.808 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:48.808 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc2Yzc1MmI1MjhkMzlkZTE4Zjc4NTgxMTBlYWQxMGZiODUwYTcxNDI5MDczZWM0rc/TjQ==: 00:34:48.808 02:56:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: 00:34:48.808 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:48.808 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:48.808 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc2Yzc1MmI1MjhkMzlkZTE4Zjc4NTgxMTBlYWQxMGZiODUwYTcxNDI5MDczZWM0rc/TjQ==: 00:34:48.808 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: ]] 00:34:48.808 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.809 nvme0n1 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.809 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGUwZjUyODFiZmVhYTk1ZjY1ZjY3ZDI1ZDQyYTI5NjdhZjBkOGVjZGRlMGYwOGFiN2U0NjJlMjZhOTk5Nzk0YQNJYvs=: 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MGUwZjUyODFiZmVhYTk1ZjY1ZjY3ZDI1ZDQyYTI5NjdhZjBkOGVjZGRlMGYwOGFiN2U0NjJlMjZhOTk5Nzk0YQNJYvs=: 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.068 02:56:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.068 nvme0n1 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ0NWUzZTI0OGI2OWI2ZWM5OTgxYmYyZDY2MTY5NjSFveV2: 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:49.068 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:49.328 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ0NWUzZTI0OGI2OWI2ZWM5OTgxYmYyZDY2MTY5NjSFveV2: 00:34:49.328 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: ]] 00:34:49.328 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: 00:34:49.328 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:49.328 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.328 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:49.328 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:49.328 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:49.328 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.328 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:49.328 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.328 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.328 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.328 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.328 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.328 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.328 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.328 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.328 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.328 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.328 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.328 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:34:49.328 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.328 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.328 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:49.328 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.328 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.586 nvme0n1 00:34:49.586 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.586 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.586 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.586 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.586 02:56:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.586 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.586 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.586 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.586 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.586 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.586 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.586 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:34:49.586 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:49.586 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.586 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:49.586 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:49.586 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:49.586 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:34:49.586 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:34:49.586 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:49.586 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:49.586 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:34:49.586 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: ]] 00:34:49.586 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:34:49.586 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:49.586 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.586 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:49.586 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:49.586 
02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:49.586 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.586 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:49.586 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.586 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.586 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.586 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.587 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.587 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.587 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.587 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.587 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.587 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.587 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.587 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.587 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.587 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.587 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:49.587 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.587 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.845 nvme0n1 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:49.845 02:56:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: ]] 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.845 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.104 nvme0n1 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.104 02:56:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc2Yzc1MmI1MjhkMzlkZTE4Zjc4NTgxMTBlYWQxMGZiODUwYTcxNDI5MDczZWM0rc/TjQ==: 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: 00:34:50.104 
02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc2Yzc1MmI1MjhkMzlkZTE4Zjc4NTgxMTBlYWQxMGZiODUwYTcxNDI5MDczZWM0rc/TjQ==: 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: ]] 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:50.104 02:56:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.104 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.363 nvme0n1 00:34:50.363 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.363 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.363 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:50.363 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.363 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.363 02:56:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.363 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.363 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.363 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.363 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.363 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.363 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:50.363 02:56:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:50.363 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.363 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:50.363 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:50.363 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:50.363 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGUwZjUyODFiZmVhYTk1ZjY1ZjY3ZDI1ZDQyYTI5NjdhZjBkOGVjZGRlMGYwOGFiN2U0NjJlMjZhOTk5Nzk0YQNJYvs=: 00:34:50.363 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:50.363 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:50.363 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:50.363 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGUwZjUyODFiZmVhYTk1ZjY1ZjY3ZDI1ZDQyYTI5NjdhZjBkOGVjZGRlMGYwOGFiN2U0NjJlMjZhOTk5Nzk0YQNJYvs=: 00:34:50.363 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:34:50.363 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:50.364 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.364 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:50.364 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:50.364 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:50.364 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.364 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:50.364 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.364 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.364 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.364 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.364 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:50.364 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:50.364 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:50.364 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.364 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.364 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:50.364 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.364 
02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:50.364 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:50.364 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:50.364 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:50.622 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.622 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.622 nvme0n1 00:34:50.622 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.622 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.622 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:50.622 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.622 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.880 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.880 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.880 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.880 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.880 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.880 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.880 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:50.880 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:50.880 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:34:50.880 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.880 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:50.880 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:50.880 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:50.880 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ0NWUzZTI0OGI2OWI2ZWM5OTgxYmYyZDY2MTY5NjSFveV2: 00:34:50.880 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: 00:34:50.880 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:50.880 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:50.880 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ0NWUzZTI0OGI2OWI2ZWM5OTgxYmYyZDY2MTY5NjSFveV2: 00:34:50.880 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: ]] 00:34:50.880 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: 00:34:50.880 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:50.880 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.880 02:56:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:50.880 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:50.880 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:50.880 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.881 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:50.881 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.881 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.881 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.881 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.881 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:50.881 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:50.881 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:50.881 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.881 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.881 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:50.881 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.881 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:50.881 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:50.881 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:50.881 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:50.881 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.881 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.139 nvme0n1 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: ]] 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.139 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.398 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.398 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.398 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:51.398 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:51.398 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:51.398 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.398 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.398 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:51.398 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.398 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:51.398 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:51.398 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:51.398 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:51.398 02:56:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.398 02:56:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.657 nvme0n1 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: ]] 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.657 02:56:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.657 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.225 nvme0n1 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc2Yzc1MmI1MjhkMzlkZTE4Zjc4NTgxMTBlYWQxMGZiODUwYTcxNDI5MDczZWM0rc/TjQ==: 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:52.225 02:56:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc2Yzc1MmI1MjhkMzlkZTE4Zjc4NTgxMTBlYWQxMGZiODUwYTcxNDI5MDczZWM0rc/TjQ==: 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: ]] 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:52.225 02:56:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.225 02:56:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.485 nvme0n1 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.485 02:56:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGUwZjUyODFiZmVhYTk1ZjY1ZjY3ZDI1ZDQyYTI5NjdhZjBkOGVjZGRlMGYwOGFiN2U0NjJlMjZhOTk5Nzk0YQNJYvs=: 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGUwZjUyODFiZmVhYTk1ZjY1ZjY3ZDI1ZDQyYTI5NjdhZjBkOGVjZGRlMGYwOGFiN2U0NjJlMjZhOTk5Nzk0YQNJYvs=: 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:52.485 02:56:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.485 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.052 nvme0n1 00:34:53.052 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.052 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ0NWUzZTI0OGI2OWI2ZWM5OTgxYmYyZDY2MTY5NjSFveV2: 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ0NWUzZTI0OGI2OWI2ZWM5OTgxYmYyZDY2MTY5NjSFveV2: 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: ]] 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:53.053 02:56:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.053 02:56:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.620 nvme0n1 00:34:53.620 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.620 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:53.620 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:53.620 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.620 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.620 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.620 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:53.620 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:53.620 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.620 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.620 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.620 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:53.620 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:53.620 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:53.620 02:56:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:53.620 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:53.620 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:53.620 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:34:53.620 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:34:53.620 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:53.620 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:53.620 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:34:53.621 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: ]] 00:34:53.621 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:34:53.621 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:53.621 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:53.621 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:53.621 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:53.621 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:53.621 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:53.621 02:56:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:53.621 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.621 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.621 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.621 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:53.621 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:53.621 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:53.621 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:53.621 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:53.621 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:53.621 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:53.621 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:53.621 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:53.621 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:53.621 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:53.621 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:53.621 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.621 02:56:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.188 nvme0n1 00:34:54.188 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.188 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:54.188 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.188 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.188 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:54.188 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.188 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:54.188 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:54.188 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.188 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.447 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.447 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:54.447 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:54.447 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:54.447 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:54.447 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:54.447 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:54.447 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:34:54.447 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:34:54.447 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:54.447 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:54.447 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:34:54.447 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: ]] 00:34:54.447 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:34:54.447 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:54.447 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:54.447 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:54.448 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:54.448 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:54.448 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:54.448 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:54.448 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.448 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.448 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.448 02:56:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:54.448 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:54.448 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:54.448 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:54.448 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:54.448 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:54.448 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:54.448 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:54.448 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:54.448 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:54.448 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:54.448 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:54.448 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.448 02:56:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.014 nvme0n1 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc2Yzc1MmI1MjhkMzlkZTE4Zjc4NTgxMTBlYWQxMGZiODUwYTcxNDI5MDczZWM0rc/TjQ==: 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:MDc2Yzc1MmI1MjhkMzlkZTE4Zjc4NTgxMTBlYWQxMGZiODUwYTcxNDI5MDczZWM0rc/TjQ==: 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: ]] 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:55.014 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.015 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.015 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:55.015 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:55.015 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:55.015 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:55.015 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:55.015 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:55.015 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.015 02:56:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.583 nvme0n1 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGUwZjUyODFiZmVhYTk1ZjY1ZjY3ZDI1ZDQyYTI5NjdhZjBkOGVjZGRlMGYwOGFiN2U0NjJlMjZhOTk5Nzk0YQNJYvs=: 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGUwZjUyODFiZmVhYTk1ZjY1ZjY3ZDI1ZDQyYTI5NjdhZjBkOGVjZGRlMGYwOGFiN2U0NjJlMjZhOTk5Nzk0YQNJYvs=: 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:55.583 
02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.583 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.151 nvme0n1 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ0NWUzZTI0OGI2OWI2ZWM5OTgxYmYyZDY2MTY5NjSFveV2: 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ0NWUzZTI0OGI2OWI2ZWM5OTgxYmYyZDY2MTY5NjSFveV2: 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: ]] 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.151 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.410 nvme0n1 00:34:56.410 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.410 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:56.410 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:56.410 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.410 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.410 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.410 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:56.410 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:56.410 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.410 02:56:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.410 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.410 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:56.410 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:56.410 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.410 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:56.410 
02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:56.410 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:56.410 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:34:56.410 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:34:56.410 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:56.410 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:56.410 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:34:56.410 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: ]] 00:34:56.410 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:34:56.410 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:56.410 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.410 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:56.411 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:56.411 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:56.411 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:56.411 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:34:56.411 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.411 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.411 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.411 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:56.411 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:56.411 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:56.411 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:56.411 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:56.411 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:56.411 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:56.411 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:56.411 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:56.411 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:56.411 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:56.411 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:56.411 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.411 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.670 nvme0n1 
00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:34:56.670 02:56:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: ]] 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:56.670 
02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.670 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.930 nvme0n1 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.930 02:56:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc2Yzc1MmI1MjhkMzlkZTE4Zjc4NTgxMTBlYWQxMGZiODUwYTcxNDI5MDczZWM0rc/TjQ==: 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MDc2Yzc1MmI1MjhkMzlkZTE4Zjc4NTgxMTBlYWQxMGZiODUwYTcxNDI5MDczZWM0rc/TjQ==: 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: ]] 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.930 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.189 nvme0n1 00:34:57.189 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGUwZjUyODFiZmVhYTk1ZjY1ZjY3ZDI1ZDQyYTI5NjdhZjBkOGVjZGRlMGYwOGFiN2U0NjJlMjZhOTk5Nzk0YQNJYvs=: 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGUwZjUyODFiZmVhYTk1ZjY1ZjY3ZDI1ZDQyYTI5NjdhZjBkOGVjZGRlMGYwOGFiN2U0NjJlMjZhOTk5Nzk0YQNJYvs=: 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.190 02:56:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.190 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.449 nvme0n1 00:34:57.449 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.449 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.449 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.449 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.449 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.449 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.449 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.449 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.449 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.449 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.449 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.449 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:57.449 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.449 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:34:57.449 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.449 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:57.449 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:57.449 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:57.449 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ0NWUzZTI0OGI2OWI2ZWM5OTgxYmYyZDY2MTY5NjSFveV2: 00:34:57.449 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: 00:34:57.449 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:57.449 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:57.449 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ0NWUzZTI0OGI2OWI2ZWM5OTgxYmYyZDY2MTY5NjSFveV2: 00:34:57.450 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: ]] 00:34:57.450 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: 00:34:57.450 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:57.450 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.450 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:57.450 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:57.450 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:34:57.450 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.450 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:57.450 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.450 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.450 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.450 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.450 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.450 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.450 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.450 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.450 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.450 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:57.450 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.450 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:57.450 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:57.450 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:57.450 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:57.450 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.450 02:56:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.710 nvme0n1 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:57.710 
02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: ]] 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.710 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.969 nvme0n1 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 
00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: ]] 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.969 02:56:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.969 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.229 nvme0n1 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.229 02:56:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc2Yzc1MmI1MjhkMzlkZTE4Zjc4NTgxMTBlYWQxMGZiODUwYTcxNDI5MDczZWM0rc/TjQ==: 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc2Yzc1MmI1MjhkMzlkZTE4Zjc4NTgxMTBlYWQxMGZiODUwYTcxNDI5MDczZWM0rc/TjQ==: 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: ]] 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.229 nvme0n1 00:34:58.229 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGUwZjUyODFiZmVhYTk1ZjY1ZjY3ZDI1ZDQyYTI5NjdhZjBkOGVjZGRlMGYwOGFiN2U0NjJlMjZhOTk5Nzk0YQNJYvs=: 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGUwZjUyODFiZmVhYTk1ZjY1ZjY3ZDI1ZDQyYTI5NjdhZjBkOGVjZGRlMGYwOGFiN2U0NjJlMjZhOTk5Nzk0YQNJYvs=: 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.488 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.489 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.489 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.489 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.489 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.489 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.489 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.489 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.489 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.489 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:58.489 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.489 02:56:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.489 nvme0n1 00:34:58.489 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.489 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.489 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.489 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.489 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.748 02:56:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ0NWUzZTI0OGI2OWI2ZWM5OTgxYmYyZDY2MTY5NjSFveV2: 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ0NWUzZTI0OGI2OWI2ZWM5OTgxYmYyZDY2MTY5NjSFveV2: 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: ]] 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.748 02:56:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:58.748 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.748 02:56:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.007 nvme0n1 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: ]] 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.007 
02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:59.007 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.008 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:59.008 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:59.008 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:59.008 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:59.008 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.008 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.267 nvme0n1 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.267 02:56:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:59.267 02:56:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: ]] 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.267 02:56:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.526 nvme0n1 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc2Yzc1MmI1MjhkMzlkZTE4Zjc4NTgxMTBlYWQxMGZiODUwYTcxNDI5MDczZWM0rc/TjQ==: 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc2Yzc1MmI1MjhkMzlkZTE4Zjc4NTgxMTBlYWQxMGZiODUwYTcxNDI5MDczZWM0rc/TjQ==: 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: ]] 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.526 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.785 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.785 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.785 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.785 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.785 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.785 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.785 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.785 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:59.785 02:56:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.785 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:59.785 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:59.785 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:59.786 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:59.786 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.786 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.045 nvme0n1 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.045 02:56:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGUwZjUyODFiZmVhYTk1ZjY1ZjY3ZDI1ZDQyYTI5NjdhZjBkOGVjZGRlMGYwOGFiN2U0NjJlMjZhOTk5Nzk0YQNJYvs=: 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGUwZjUyODFiZmVhYTk1ZjY1ZjY3ZDI1ZDQyYTI5NjdhZjBkOGVjZGRlMGYwOGFiN2U0NjJlMjZhOTk5Nzk0YQNJYvs=: 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:00.045 02:56:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:00.045 
02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.045 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.304 nvme0n1 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:00.304 02:56:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ0NWUzZTI0OGI2OWI2ZWM5OTgxYmYyZDY2MTY5NjSFveV2: 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ0NWUzZTI0OGI2OWI2ZWM5OTgxYmYyZDY2MTY5NjSFveV2: 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: ]] 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.304 02:56:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.871 nvme0n1 
00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:35:00.872 02:56:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: ]] 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.872 
02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.872 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.131 nvme0n1 00:35:01.131 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.131 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.131 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.131 02:56:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.131 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.131 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.131 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.131 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.131 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.131 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.131 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.131 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.131 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:35:01.131 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.131 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:01.131 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:01.131 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:01.131 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:35:01.131 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:35:01.131 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:01.131 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:01.131 02:56:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:35:01.132 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: ]] 00:35:01.132 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:35:01.132 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:35:01.132 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.132 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:01.132 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:01.132 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:01.132 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.132 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:01.132 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.132 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.132 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.132 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.132 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:01.132 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:01.132 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:01.132 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.132 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.132 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:01.132 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:01.132 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:01.132 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:01.132 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:01.132 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:01.132 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.132 02:56:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.700 nvme0n1 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc2Yzc1MmI1MjhkMzlkZTE4Zjc4NTgxMTBlYWQxMGZiODUwYTcxNDI5MDczZWM0rc/TjQ==: 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc2Yzc1MmI1MjhkMzlkZTE4Zjc4NTgxMTBlYWQxMGZiODUwYTcxNDI5MDczZWM0rc/TjQ==: 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: ]] 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: 00:35:01.700 02:56:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:01.700 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:01.700 02:56:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:01.701 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:01.701 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:01.701 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:01.701 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.701 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.959 nvme0n1 00:35:01.959 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.959 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.960 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.960 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.960 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.960 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.219 02:56:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGUwZjUyODFiZmVhYTk1ZjY1ZjY3ZDI1ZDQyYTI5NjdhZjBkOGVjZGRlMGYwOGFiN2U0NjJlMjZhOTk5Nzk0YQNJYvs=: 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGUwZjUyODFiZmVhYTk1ZjY1ZjY3ZDI1ZDQyYTI5NjdhZjBkOGVjZGRlMGYwOGFiN2U0NjJlMjZhOTk5Nzk0YQNJYvs=: 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:02.219 02:56:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.478 nvme0n1 00:35:02.478 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:02.479 02:56:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ0NWUzZTI0OGI2OWI2ZWM5OTgxYmYyZDY2MTY5NjSFveV2: 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ0NWUzZTI0OGI2OWI2ZWM5OTgxYmYyZDY2MTY5NjSFveV2: 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: ]] 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.479 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.046 nvme0n1 00:35:03.046 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:35:03.046 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.046 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:03.046 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.046 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.046 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.046 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:03.046 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:03.046 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.046 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: ]] 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.305 02:56:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.873 nvme0n1 00:35:03.873 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.873 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.873 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:03.873 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:03.873 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.873 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.873 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:03.873 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:03.873 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.873 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.873 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.873 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:03.873 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:35:03.873 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:03.873 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:03.873 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:03.873 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:03.873 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:35:03.873 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:35:03.873 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:03.873 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:03.873 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:35:03.874 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: ]] 00:35:03.874 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:35:03.874 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:35:03.874 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:03.874 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:03.874 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:03.874 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:03.874 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:03.874 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:03.874 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.874 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.874 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.874 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:03.874 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:03.874 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:03.874 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:03.874 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.874 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.874 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:03.874 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:03.874 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:03.874 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:03.874 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:03.874 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:03.874 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.874 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.442 nvme0n1 00:35:04.442 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.442 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:04.442 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:04.442 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.442 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.442 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.442 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:04.442 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:35:04.442 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.442 02:56:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc2Yzc1MmI1MjhkMzlkZTE4Zjc4NTgxMTBlYWQxMGZiODUwYTcxNDI5MDczZWM0rc/TjQ==: 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc2Yzc1MmI1MjhkMzlkZTE4Zjc4NTgxMTBlYWQxMGZiODUwYTcxNDI5MDczZWM0rc/TjQ==: 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: ]] 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.442 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.009 nvme0n1 00:35:05.009 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.009 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:05.009 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:05.009 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.009 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.009 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.009 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:05.009 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:05.009 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.009 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.009 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.009 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:05.009 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:35:05.009 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:05.009 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:05.009 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:05.009 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:05.010 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGUwZjUyODFiZmVhYTk1ZjY1ZjY3ZDI1ZDQyYTI5NjdhZjBkOGVjZGRlMGYwOGFiN2U0NjJlMjZhOTk5Nzk0YQNJYvs=: 00:35:05.010 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:05.010 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:05.010 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:05.010 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGUwZjUyODFiZmVhYTk1ZjY1ZjY3ZDI1ZDQyYTI5NjdhZjBkOGVjZGRlMGYwOGFiN2U0NjJlMjZhOTk5Nzk0YQNJYvs=: 00:35:05.010 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:05.010 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:35:05.010 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:05.010 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:05.010 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:05.010 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:05.010 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:05.010 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:05.010 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.010 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.010 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.010 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:05.010 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:05.010 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:05.010 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:05.010 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:05.010 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:05.010 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:05.010 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:05.010 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:05.010 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:05.010 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:05.010 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:05.010 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.010 02:56:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:05.577 nvme0n1 00:35:05.577 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.577 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:05.577 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:05.577 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.577 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
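Each iteration above builds the attach arguments with the expansion at host/auth.sh@58, `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})`, which yields the extra `--dhchap-ctrlr-key` flag only when a controller key is configured for that keyid (note keyid 4 in this run has an empty ckey). A minimal standalone sketch of that idiom, with hypothetical key values standing in for the real DHHC-1 secrets:

```shell
#!/usr/bin/env bash
# Sketch of the conditional-argument idiom from host/auth.sh@58.
# The ckeys entries are hypothetical stand-ins for the DHHC-1 secrets.
ckeys=([1]="DHHC-1:00:example==" [4]="")   # keyid 4 has no controller key

build_args() {
    local keyid=$1
    # ${var:+words} expands to the extra words only when ckeys[keyid]
    # is set and non-empty; otherwise the array stays empty.
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "--dhchap-key key${keyid}" "${ckey[@]}"
}

build_args 1   # controller key present: ctrlr-key flag emitted
build_args 4   # empty entry: flag omitted entirely
```

This is why the keyid-4 attach in the log carries only `--dhchap-key key4` while the others also pass `--dhchap-ctrlr-key ckeyN`.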
00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ0NWUzZTI0OGI2OWI2ZWM5OTgxYmYyZDY2MTY5NjSFveV2: 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ0NWUzZTI0OGI2OWI2ZWM5OTgxYmYyZDY2MTY5NjSFveV2: 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: ]] 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:05.836 02:56:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:05.836 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:05.837 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:05.837 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.837 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.837 nvme0n1 00:35:05.837 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.837 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:05.837 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:05.837 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.837 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.837 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: ]] 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.096 nvme0n1 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
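The `get_main_ns_ip` fragments repeated above (nvmf/common.sh@769-783) resolve the connect address by mapping the transport name to the *name* of an environment variable and then dereferencing it. A standalone sketch of that lookup, with hypothetical IP values in place of the test environment's:

```shell
#!/usr/bin/env bash
# Sketch of get_main_ns_ip's transport -> IP lookup (nvmf/common.sh).
# The two IPs are hypothetical; the real values come from the test env.
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_INITIATOR_IP=10.0.0.1

get_main_ns_ip() {
    local transport=$1
    # Map each transport to the NAME of the variable holding its IP.
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )
    local ip=${ip_candidates[$transport]}
    echo "${!ip}"   # indirect expansion: value of the named variable
}

get_main_ns_ip tcp
```

With `tcp` this prints the initiator-side IP, matching the `echo 10.0.0.1` lines in the transcript that feed `-a` on each `bdev_nvme_attach_controller` call.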
00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: ]] 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:06.096 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:06.355 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:06.355 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.355 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.355 nvme0n1 00:35:06.355 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.355 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.355 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.355 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.355 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.355 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.355 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.355 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:35:06.355 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.355 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.355 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.355 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.355 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:35:06.355 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc2Yzc1MmI1MjhkMzlkZTE4Zjc4NTgxMTBlYWQxMGZiODUwYTcxNDI5MDczZWM0rc/TjQ==: 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc2Yzc1MmI1MjhkMzlkZTE4Zjc4NTgxMTBlYWQxMGZiODUwYTcxNDI5MDczZWM0rc/TjQ==: 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: ]] 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.356 02:56:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.615 nvme0n1 00:35:06.615 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.615 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGUwZjUyODFiZmVhYTk1ZjY1ZjY3ZDI1ZDQyYTI5NjdhZjBkOGVjZGRlMGYwOGFiN2U0NjJlMjZhOTk5Nzk0YQNJYvs=: 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGUwZjUyODFiZmVhYTk1ZjY1ZjY3ZDI1ZDQyYTI5NjdhZjBkOGVjZGRlMGYwOGFiN2U0NjJlMjZhOTk5Nzk0YQNJYvs=: 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.616 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:06.875 nvme0n1 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:06.875 02:56:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ0NWUzZTI0OGI2OWI2ZWM5OTgxYmYyZDY2MTY5NjSFveV2: 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ0NWUzZTI0OGI2OWI2ZWM5OTgxYmYyZDY2MTY5NjSFveV2: 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: ]] 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.875 02:56:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.875 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.135 nvme0n1 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:35:07.135 02:56:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: ]] 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.135 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.394 nvme0n1 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.394 
02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: ]] 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.394 02:56:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.394 02:56:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.654 nvme0n1 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.654 02:56:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc2Yzc1MmI1MjhkMzlkZTE4Zjc4NTgxMTBlYWQxMGZiODUwYTcxNDI5MDczZWM0rc/TjQ==: 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc2Yzc1MmI1MjhkMzlkZTE4Zjc4NTgxMTBlYWQxMGZiODUwYTcxNDI5MDczZWM0rc/TjQ==: 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: ]] 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:07.654 02:56:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.654 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.913 nvme0n1 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:35:07.913 02:56:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGUwZjUyODFiZmVhYTk1ZjY1ZjY3ZDI1ZDQyYTI5NjdhZjBkOGVjZGRlMGYwOGFiN2U0NjJlMjZhOTk5Nzk0YQNJYvs=: 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGUwZjUyODFiZmVhYTk1ZjY1ZjY3ZDI1ZDQyYTI5NjdhZjBkOGVjZGRlMGYwOGFiN2U0NjJlMjZhOTk5Nzk0YQNJYvs=: 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.913 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.172 nvme0n1 00:35:08.172 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.172 
02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.172 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.172 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.172 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.172 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.172 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.172 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.172 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.172 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.172 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.172 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:08.172 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.172 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:35:08.172 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.172 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:08.172 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:08.172 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:08.172 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ0NWUzZTI0OGI2OWI2ZWM5OTgxYmYyZDY2MTY5NjSFveV2: 00:35:08.172 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: 00:35:08.172 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:08.172 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:08.172 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ0NWUzZTI0OGI2OWI2ZWM5OTgxYmYyZDY2MTY5NjSFveV2: 00:35:08.172 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: ]] 00:35:08.172 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: 00:35:08.172 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:35:08.172 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.172 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:08.173 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:08.173 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:08.173 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.173 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:08.173 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.173 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.173 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.173 
02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.173 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:08.173 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:08.173 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:08.173 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.173 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.173 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:08.173 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.173 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:08.173 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:08.173 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:08.173 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:08.173 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.173 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.432 nvme0n1 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.432 02:56:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: ]] 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.432 02:56:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:08.432 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:08.432 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:08.432 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:08.432 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.432 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.692 nvme0n1 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: ]] 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.692 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.951 nvme0n1 00:35:08.951 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.951 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.951 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.951 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.951 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.951 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.951 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.951 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.951 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.951 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.210 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.210 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.210 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:35:09.210 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.210 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:09.210 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:09.210 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:09.211 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc2Yzc1MmI1MjhkMzlkZTE4Zjc4NTgxMTBlYWQxMGZiODUwYTcxNDI5MDczZWM0rc/TjQ==: 00:35:09.211 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: 00:35:09.211 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:09.211 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:09.211 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc2Yzc1MmI1MjhkMzlkZTE4Zjc4NTgxMTBlYWQxMGZiODUwYTcxNDI5MDczZWM0rc/TjQ==: 00:35:09.211 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: ]] 00:35:09.211 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: 00:35:09.211 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:35:09.211 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.211 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:09.211 02:56:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:09.211 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:09.211 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.211 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:09.211 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.211 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.211 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.211 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.211 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:09.211 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.211 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.211 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.211 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.211 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:09.211 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.211 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:09.211 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:09.211 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:09.211 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:09.211 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.211 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.470 nvme0n1 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.470 02:56:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGUwZjUyODFiZmVhYTk1ZjY1ZjY3ZDI1ZDQyYTI5NjdhZjBkOGVjZGRlMGYwOGFiN2U0NjJlMjZhOTk5Nzk0YQNJYvs=: 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGUwZjUyODFiZmVhYTk1ZjY1ZjY3ZDI1ZDQyYTI5NjdhZjBkOGVjZGRlMGYwOGFiN2U0NjJlMjZhOTk5Nzk0YQNJYvs=: 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.470 02:56:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.729 nvme0n1 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.729 
02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ0NWUzZTI0OGI2OWI2ZWM5OTgxYmYyZDY2MTY5NjSFveV2: 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ0NWUzZTI0OGI2OWI2ZWM5OTgxYmYyZDY2MTY5NjSFveV2: 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: ]] 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.729 02:56:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.729 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.296 nvme0n1 00:35:10.296 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.296 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.296 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.296 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.296 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.296 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.296 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.296 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.296 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.296 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.296 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.296 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.296 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:10.297 02:56:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: ]] 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.297 02:56:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.556 nvme0n1 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: ]] 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:35:10.556 
02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:10.556 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:10.816 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.816 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.816 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:10.816 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.816 02:56:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:10.816 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:10.816 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:10.816 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:10.816 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.816 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.075 nvme0n1 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.075 02:56:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc2Yzc1MmI1MjhkMzlkZTE4Zjc4NTgxMTBlYWQxMGZiODUwYTcxNDI5MDczZWM0rc/TjQ==: 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc2Yzc1MmI1MjhkMzlkZTE4Zjc4NTgxMTBlYWQxMGZiODUwYTcxNDI5MDczZWM0rc/TjQ==: 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: ]] 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.075 02:56:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.643 nvme0n1 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:11.643 02:56:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGUwZjUyODFiZmVhYTk1ZjY1ZjY3ZDI1ZDQyYTI5NjdhZjBkOGVjZGRlMGYwOGFiN2U0NjJlMjZhOTk5Nzk0YQNJYvs=: 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGUwZjUyODFiZmVhYTk1ZjY1ZjY3ZDI1ZDQyYTI5NjdhZjBkOGVjZGRlMGYwOGFiN2U0NjJlMjZhOTk5Nzk0YQNJYvs=: 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:11.643 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:11.644 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:11.644 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:11.644 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:11.644 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.644 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.903 nvme0n1 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:11.903 
02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWQ0NWUzZTI0OGI2OWI2ZWM5OTgxYmYyZDY2MTY5NjSFveV2: 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWQ0NWUzZTI0OGI2OWI2ZWM5OTgxYmYyZDY2MTY5NjSFveV2: 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: ]] 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MTliYTBjYmY2Y2MzMTk2MmY1MTMxMjg2NzRhMzExYzA0Njc1MTBhYWIzMDcxOTNiNTE4N2M1MzJmOGVmNGQ0YWgIlBM=: 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.903 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.162 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.162 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.162 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:12.162 02:56:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:12.162 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:12.162 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.162 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.162 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:12.162 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.162 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:12.162 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:12.162 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:12.162 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:12.162 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.162 02:56:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.730 nvme0n1 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.730 02:56:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:35:12.730 02:56:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: ]] 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.730 02:56:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.730 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.299 nvme0n1 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.299 02:56:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: ]] 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:35:13.299 02:56:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.299 02:56:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.867 nvme0n1 00:35:13.867 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.867 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.867 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.867 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.867 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.867 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.867 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.867 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.867 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.867 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.867 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.867 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:13.867 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MDc2Yzc1MmI1MjhkMzlkZTE4Zjc4NTgxMTBlYWQxMGZiODUwYTcxNDI5MDczZWM0rc/TjQ==: 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MDc2Yzc1MmI1MjhkMzlkZTE4Zjc4NTgxMTBlYWQxMGZiODUwYTcxNDI5MDczZWM0rc/TjQ==: 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: ]] 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ODIyMWY0YjlkYTY1MThmZTFjMjc2ZDJjNjcxMmMzMTK2eKvm: 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:13.868 02:56:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.868 02:56:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.435 nvme0n1 00:35:14.435 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.435 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:14.436 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:14.436 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.436 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.436 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MGUwZjUyODFiZmVhYTk1ZjY1ZjY3ZDI1ZDQyYTI5NjdhZjBkOGVjZGRlMGYwOGFiN2U0NjJlMjZhOTk5Nzk0YQNJYvs=: 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MGUwZjUyODFiZmVhYTk1ZjY1ZjY3ZDI1ZDQyYTI5NjdhZjBkOGVjZGRlMGYwOGFiN2U0NjJlMjZhOTk5Nzk0YQNJYvs=: 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:14.694 
02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.694 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.262 nvme0n1 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: ]] 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.262 request: 00:35:15.262 { 00:35:15.262 "name": "nvme0", 00:35:15.262 "trtype": "tcp", 00:35:15.262 "traddr": "10.0.0.1", 00:35:15.262 "adrfam": "ipv4", 00:35:15.262 "trsvcid": "4420", 00:35:15.262 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:15.262 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:15.262 "prchk_reftag": false, 00:35:15.262 "prchk_guard": false, 00:35:15.262 "hdgst": false, 00:35:15.262 "ddgst": false, 00:35:15.262 "allow_unrecognized_csi": false, 00:35:15.262 "method": "bdev_nvme_attach_controller", 00:35:15.262 "req_id": 1 00:35:15.262 } 00:35:15.262 Got JSON-RPC error 
response 00:35:15.262 response: 00:35:15.262 { 00:35:15.262 "code": -5, 00:35:15.262 "message": "Input/output error" 00:35:15.262 } 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.262 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.521 request: 
00:35:15.521 { 00:35:15.521 "name": "nvme0", 00:35:15.521 "trtype": "tcp", 00:35:15.521 "traddr": "10.0.0.1", 00:35:15.521 "adrfam": "ipv4", 00:35:15.521 "trsvcid": "4420", 00:35:15.521 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:15.521 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:15.521 "prchk_reftag": false, 00:35:15.521 "prchk_guard": false, 00:35:15.521 "hdgst": false, 00:35:15.521 "ddgst": false, 00:35:15.521 "dhchap_key": "key2", 00:35:15.521 "allow_unrecognized_csi": false, 00:35:15.521 "method": "bdev_nvme_attach_controller", 00:35:15.521 "req_id": 1 00:35:15.521 } 00:35:15.521 Got JSON-RPC error response 00:35:15.521 response: 00:35:15.521 { 00:35:15.521 "code": -5, 00:35:15.521 "message": "Input/output error" 00:35:15.521 } 00:35:15.521 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:15.521 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:15.521 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:15.521 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:15.521 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:15.521 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:35:15.521 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.521 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.521 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:35:15.521 02:56:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.521 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:35:15.521 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:35:15.521 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:15.521 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:15.521 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:15.521 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:15.522 02:56:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.522 request: 00:35:15.522 { 00:35:15.522 "name": "nvme0", 00:35:15.522 "trtype": "tcp", 00:35:15.522 "traddr": "10.0.0.1", 00:35:15.522 "adrfam": "ipv4", 00:35:15.522 "trsvcid": "4420", 00:35:15.522 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:15.522 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:15.522 "prchk_reftag": false, 00:35:15.522 "prchk_guard": false, 00:35:15.522 "hdgst": false, 00:35:15.522 "ddgst": false, 00:35:15.522 "dhchap_key": "key1", 00:35:15.522 "dhchap_ctrlr_key": "ckey2", 00:35:15.522 "allow_unrecognized_csi": false, 00:35:15.522 "method": "bdev_nvme_attach_controller", 00:35:15.522 "req_id": 1 00:35:15.522 } 00:35:15.522 Got JSON-RPC error response 00:35:15.522 response: 00:35:15.522 { 00:35:15.522 "code": -5, 00:35:15.522 "message": "Input/output error" 00:35:15.522 } 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.522 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.781 nvme0n1 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:15.781 02:56:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: ]] 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.781 request: 00:35:15.781 { 00:35:15.781 "name": "nvme0", 00:35:15.781 "dhchap_key": "key1", 00:35:15.781 "dhchap_ctrlr_key": "ckey2", 00:35:15.781 "method": "bdev_nvme_set_keys", 00:35:15.781 "req_id": 1 00:35:15.781 } 00:35:15.781 Got JSON-RPC error response 00:35:15.781 
response: 00:35:15.781 { 00:35:15.781 "code": -13, 00:35:15.781 "message": "Permission denied" 00:35:15.781 } 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.781 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.040 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:35:16.040 02:56:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:35:16.976 02:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:16.976 02:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:16.976 02:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.976 02:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.976 02:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.976 02:56:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:35:16.976 02:56:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWJiZDQyODU0OGNiNWE3ZTFhOGM1Yzg0ZWFiNjY1ZTZlYmY3MDliNGMwN2RkMWE5KP56/g==: 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: ]] 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWM3Mzc1M2ZiZTUwYjg0NzFlNzhjZmI5ZTg1YWU1NGEzZmIwMDAzNTQ1OGIyODUygWvaOw==: 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.912 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.172 nvme0n1 00:35:18.172 02:56:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTBmOGU4NzcyN2IzN2ViZjJlMjdlNDk5NzIwMWQyYjE6jCVU: 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: ]] 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NzYyZmQwMWJiOGQ5ODQyYWRkZDgxMjkyZTc3NDY4ODmw+RG9: 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:18.172 02:56:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.172 request: 00:35:18.172 { 00:35:18.172 "name": "nvme0", 00:35:18.172 "dhchap_key": "key2", 00:35:18.172 "dhchap_ctrlr_key": "ckey1", 00:35:18.172 "method": "bdev_nvme_set_keys", 00:35:18.172 "req_id": 1 00:35:18.172 } 00:35:18.172 Got JSON-RPC error response 00:35:18.172 response: 00:35:18.172 { 00:35:18.172 "code": -13, 00:35:18.172 "message": "Permission denied" 00:35:18.172 } 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:18.172 02:56:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:35:18.172 02:56:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:35:19.550 02:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.550 02:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:19.550 02:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.550 02:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.550 02:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.550 02:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:35:19.550 02:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:35:19.550 02:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:35:19.550 02:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:35:19.550 02:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:19.550 02:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:35:19.550 02:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:19.550 02:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:35:19.550 02:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:19.550 02:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:19.550 rmmod nvme_tcp 
00:35:19.550 rmmod nvme_fabrics 00:35:19.550 02:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:19.550 02:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:35:19.550 02:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:35:19.550 02:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1181660 ']' 00:35:19.550 02:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1181660 00:35:19.550 02:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1181660 ']' 00:35:19.550 02:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1181660 00:35:19.550 02:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:35:19.550 02:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:19.550 02:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1181660 00:35:19.550 02:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:19.550 02:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:19.550 02:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1181660' 00:35:19.550 killing process with pid 1181660 00:35:19.550 02:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1181660 00:35:19.550 02:56:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1181660 00:35:19.550 02:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:19.550 02:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:19.550 02:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:19.550 02:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:35:19.550 02:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:35:19.550 02:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:19.550 02:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:35:19.550 02:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:19.550 02:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:19.550 02:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:19.550 02:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:19.550 02:56:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:21.570 02:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:21.570 02:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:21.570 02:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:21.570 02:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:35:21.570 02:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:35:21.570 02:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:35:21.570 02:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:21.829 02:56:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:21.829 02:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:21.829 02:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:21.829 02:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:21.829 02:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:21.829 02:56:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:25.119 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:25.119 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:25.119 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:25.119 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:25.119 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:25.119 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:25.119 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:25.119 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:25.119 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:25.119 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:25.119 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:25.119 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:25.119 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:25.119 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:25.119 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:25.119 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:25.378 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:25.637 02:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.xyV /tmp/spdk.key-null.kZb /tmp/spdk.key-sha256.D9r /tmp/spdk.key-sha384.wxH 
/tmp/spdk.key-sha512.xGa /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:35:25.637 02:56:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:28.171 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:35:28.171 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:28.171 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:35:28.171 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:35:28.172 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:35:28.172 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:35:28.172 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:35:28.172 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:35:28.172 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:35:28.172 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:35:28.172 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:35:28.172 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:35:28.172 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:35:28.172 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:35:28.172 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:35:28.430 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:35:28.431 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:35:28.431 00:35:28.431 real 0m53.737s 00:35:28.431 user 0m48.586s 00:35:28.431 sys 0m12.603s 00:35:28.431 02:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:28.431 02:56:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.431 ************************************ 00:35:28.431 END TEST nvmf_auth_host 00:35:28.431 ************************************ 00:35:28.431 02:56:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:35:28.431 02:56:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:28.431 02:56:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:28.431 02:56:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:28.431 02:56:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.431 ************************************ 00:35:28.431 START TEST nvmf_digest 00:35:28.431 ************************************ 00:35:28.431 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:28.690 * Looking for test storage... 00:35:28.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:35:28.690 02:56:59 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:28.690 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:28.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:28.691 --rc genhtml_branch_coverage=1 00:35:28.691 --rc genhtml_function_coverage=1 00:35:28.691 --rc genhtml_legend=1 00:35:28.691 --rc geninfo_all_blocks=1 00:35:28.691 --rc geninfo_unexecuted_blocks=1 00:35:28.691 00:35:28.691 ' 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:28.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:28.691 --rc genhtml_branch_coverage=1 00:35:28.691 --rc genhtml_function_coverage=1 00:35:28.691 --rc genhtml_legend=1 00:35:28.691 --rc geninfo_all_blocks=1 00:35:28.691 --rc geninfo_unexecuted_blocks=1 00:35:28.691 00:35:28.691 ' 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:28.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:28.691 --rc genhtml_branch_coverage=1 00:35:28.691 --rc genhtml_function_coverage=1 00:35:28.691 --rc genhtml_legend=1 00:35:28.691 --rc geninfo_all_blocks=1 00:35:28.691 --rc geninfo_unexecuted_blocks=1 00:35:28.691 00:35:28.691 ' 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:28.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:28.691 --rc genhtml_branch_coverage=1 00:35:28.691 --rc genhtml_function_coverage=1 00:35:28.691 --rc genhtml_legend=1 00:35:28.691 --rc geninfo_all_blocks=1 00:35:28.691 --rc geninfo_unexecuted_blocks=1 00:35:28.691 00:35:28.691 ' 00:35:28.691 02:56:59 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:28.691 
02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:28.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:28.691 02:56:59 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:35:28.691 02:56:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:35.261 02:57:04 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:35.261 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:35.261 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:35.262 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:35.262 Found net devices under 0000:af:00.0: cvl_0_0 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:35.262 Found net devices under 0000:af:00.1: cvl_0_1 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:35.262 02:57:04 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:35.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:35.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:35:35.262 00:35:35.262 --- 10.0.0.2 ping statistics --- 00:35:35.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:35.262 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:35.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:35.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:35:35.262 00:35:35.262 --- 10.0.0.1 ping statistics --- 00:35:35.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:35.262 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:35.262 ************************************ 00:35:35.262 START TEST nvmf_digest_clean 00:35:35.262 ************************************ 00:35:35.262 
02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1195370 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1195370 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1195370 ']' 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:35.262 02:57:05 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:35.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:35.262 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:35.262 [2024-12-16 02:57:05.310697] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:35.262 [2024-12-16 02:57:05.310743] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:35.262 [2024-12-16 02:57:05.391652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:35.262 [2024-12-16 02:57:05.413326] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:35.262 [2024-12-16 02:57:05.413361] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:35.262 [2024-12-16 02:57:05.413368] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:35.262 [2024-12-16 02:57:05.413374] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:35.263 [2024-12-16 02:57:05.413379] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:35.263 [2024-12-16 02:57:05.413884] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:35.263 null0 00:35:35.263 [2024-12-16 02:57:05.580368] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:35.263 [2024-12-16 02:57:05.604541] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
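The `run_bperf randread 4096 128 false` call traced here unpacks its positional arguments into workload, block size, queue depth, and a DSA-scan flag, which feed the bdevperf command line seen in this run. A hypothetical reduction of that mapping (the helper `bperf_flags` is not part of digest.sh; the flags and the fixed 2-second runtime are read off the traced invocation):

```shell
# Hypothetical helper mirroring how run_bperf forwards its positional
# args (rw, bs, qd, scan_dsa) into bdevperf's -w/-o/-q flags, with the
# core mask and --wait-for-rpc fixed as in this trace.
bperf_flags() {
    local rw=$1 bs=$2 qd=$3 scan_dsa=$4
    printf '%s\n' "-m 2 -w $rw -o $bs -t 2 -q $qd -z --wait-for-rpc"
}
bperf_flags randread 4096 128 false
```

The socket path (`-r /var/tmp/bperf.sock`) is passed separately in the real script and is omitted from this sketch.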
00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1195397 00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1195397 /var/tmp/bperf.sock 00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1195397 ']' 00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:35.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:35.263 [2024-12-16 02:57:05.656913] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:35.263 [2024-12-16 02:57:05.656954] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195397 ] 00:35:35.263 [2024-12-16 02:57:05.729802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:35.263 [2024-12-16 02:57:05.751855] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:35.263 02:57:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:35.521 02:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:35.521 02:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:36.087 nvme0n1 00:35:36.087 02:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:36.087 02:57:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:36.087 Running I/O for 2 seconds... 00:35:37.958 25510.00 IOPS, 99.65 MiB/s [2024-12-16T01:57:08.617Z] 25407.00 IOPS, 99.25 MiB/s 00:35:37.958 Latency(us) 00:35:37.958 [2024-12-16T01:57:08.617Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:37.958 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:37.958 nvme0n1 : 2.00 25415.04 99.28 0.00 0.00 5030.52 2574.63 15666.22 00:35:37.958 [2024-12-16T01:57:08.617Z] =================================================================================================================== 00:35:37.958 [2024-12-16T01:57:08.617Z] Total : 25415.04 99.28 0.00 0.00 5030.52 2574.63 15666.22 00:35:37.958 { 00:35:37.958 "results": [ 00:35:37.958 { 00:35:37.958 "job": "nvme0n1", 00:35:37.958 "core_mask": "0x2", 00:35:37.958 "workload": "randread", 00:35:37.958 "status": "finished", 00:35:37.958 "queue_depth": 128, 00:35:37.959 "io_size": 4096, 00:35:37.959 "runtime": 2.004561, 00:35:37.959 "iops": 25415.040999001776, 00:35:37.959 "mibps": 99.27750390235069, 00:35:37.959 "io_failed": 0, 00:35:37.959 "io_timeout": 0, 00:35:37.959 "avg_latency_us": 5030.517224400065, 00:35:37.959 "min_latency_us": 2574.6285714285714, 00:35:37.959 "max_latency_us": 15666.224761904761 00:35:37.959 } 00:35:37.959 ], 00:35:37.959 "core_count": 1 00:35:37.959 } 00:35:37.959 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:37.959 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:35:37.959 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:37.959 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:37.959 | select(.opcode=="crc32c") 00:35:37.959 | "\(.module_name) \(.executed)"' 00:35:37.959 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:38.217 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:38.217 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:38.217 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:38.217 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:38.217 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1195397 00:35:38.217 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1195397 ']' 00:35:38.217 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1195397 00:35:38.218 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:38.218 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:38.218 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1195397 00:35:38.218 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:38.218 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:38.218 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1195397' 00:35:38.218 killing process with pid 1195397 00:35:38.218 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1195397 00:35:38.218 Received shutdown signal, test time was about 2.000000 seconds 00:35:38.218 00:35:38.218 Latency(us) 00:35:38.218 [2024-12-16T01:57:08.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:38.218 [2024-12-16T01:57:08.877Z] =================================================================================================================== 00:35:38.218 [2024-12-16T01:57:08.877Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:38.218 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1195397 00:35:38.477 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:35:38.477 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:38.477 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:38.477 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:38.477 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:38.477 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:38.477 02:57:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:38.477 02:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1196269 00:35:38.477 02:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 1196269 /var/tmp/bperf.sock 00:35:38.477 02:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:38.477 02:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1196269 ']' 00:35:38.477 02:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:38.477 02:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:38.477 02:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:38.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:38.477 02:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:38.477 02:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:38.477 [2024-12-16 02:57:09.045271] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:38.477 [2024-12-16 02:57:09.045316] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196269 ] 00:35:38.477 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:38.477 Zero copy mechanism will not be used. 
00:35:38.477 [2024-12-16 02:57:09.119094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:38.736 [2024-12-16 02:57:09.141710] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:38.736 02:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:38.736 02:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:38.736 02:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:38.736 02:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:38.736 02:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:38.995 02:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:38.995 02:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:39.254 nvme0n1 00:35:39.254 02:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:39.254 02:57:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:39.513 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:39.513 Zero copy mechanism will not be used. 00:35:39.513 Running I/O for 2 seconds... 
00:35:41.385 5904.00 IOPS, 738.00 MiB/s [2024-12-16T01:57:12.044Z] 5507.50 IOPS, 688.44 MiB/s 00:35:41.385 Latency(us) 00:35:41.385 [2024-12-16T01:57:12.044Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:41.385 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:41.385 nvme0n1 : 2.00 5510.84 688.85 0.00 0.00 2900.57 628.05 10985.08 00:35:41.385 [2024-12-16T01:57:12.044Z] =================================================================================================================== 00:35:41.385 [2024-12-16T01:57:12.044Z] Total : 5510.84 688.85 0.00 0.00 2900.57 628.05 10985.08 00:35:41.385 { 00:35:41.385 "results": [ 00:35:41.385 { 00:35:41.385 "job": "nvme0n1", 00:35:41.385 "core_mask": "0x2", 00:35:41.385 "workload": "randread", 00:35:41.385 "status": "finished", 00:35:41.385 "queue_depth": 16, 00:35:41.385 "io_size": 131072, 00:35:41.385 "runtime": 2.001693, 00:35:41.385 "iops": 5510.835078106383, 00:35:41.385 "mibps": 688.8543847632978, 00:35:41.385 "io_failed": 0, 00:35:41.385 "io_timeout": 0, 00:35:41.385 "avg_latency_us": 2900.569569222667, 00:35:41.385 "min_latency_us": 628.0533333333333, 00:35:41.385 "max_latency_us": 10985.081904761904 00:35:41.385 } 00:35:41.385 ], 00:35:41.385 "core_count": 1 00:35:41.385 } 00:35:41.385 02:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:41.385 02:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:41.385 02:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:41.385 02:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:41.385 | select(.opcode=="crc32c") 00:35:41.385 | "\(.module_name) \(.executed)"' 00:35:41.385 02:57:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:41.644 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:41.644 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:41.644 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:41.644 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:41.644 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1196269 00:35:41.645 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1196269 ']' 00:35:41.645 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1196269 00:35:41.645 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:41.645 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:41.645 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1196269 00:35:41.645 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:41.645 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:41.645 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1196269' 00:35:41.645 killing process with pid 1196269 00:35:41.645 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1196269 00:35:41.645 Received shutdown signal, test time was about 2.000000 seconds 
00:35:41.645 00:35:41.645 Latency(us) 00:35:41.645 [2024-12-16T01:57:12.304Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:41.645 [2024-12-16T01:57:12.304Z] =================================================================================================================== 00:35:41.645 [2024-12-16T01:57:12.304Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:41.645 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1196269 00:35:41.904 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:35:41.904 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:41.904 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:41.904 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:41.904 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:41.904 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:41.904 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:41.904 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1196919 00:35:41.904 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1196919 /var/tmp/bperf.sock 00:35:41.904 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:41.904 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1196919 ']' 00:35:41.904 02:57:12 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:41.904 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:41.904 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:41.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:41.904 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:41.904 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:41.904 [2024-12-16 02:57:12.449335] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:41.904 [2024-12-16 02:57:12.449382] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196919 ] 00:35:41.904 [2024-12-16 02:57:12.523935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:41.904 [2024-12-16 02:57:12.545099] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:42.163 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:42.163 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:42.163 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:42.163 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:42.163 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:42.422 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:42.422 02:57:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:42.680 nvme0n1 00:35:42.680 02:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:42.680 02:57:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:42.938 Running I/O for 2 seconds... 
00:35:44.809 27661.00 IOPS, 108.05 MiB/s [2024-12-16T01:57:15.468Z] 27682.50 IOPS, 108.13 MiB/s 00:35:44.809 Latency(us) 00:35:44.809 [2024-12-16T01:57:15.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:44.809 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:44.809 nvme0n1 : 2.01 27685.76 108.15 0.00 0.00 4615.09 1942.67 6366.35 00:35:44.809 [2024-12-16T01:57:15.468Z] =================================================================================================================== 00:35:44.809 [2024-12-16T01:57:15.468Z] Total : 27685.76 108.15 0.00 0.00 4615.09 1942.67 6366.35 00:35:44.809 { 00:35:44.809 "results": [ 00:35:44.809 { 00:35:44.809 "job": "nvme0n1", 00:35:44.809 "core_mask": "0x2", 00:35:44.809 "workload": "randwrite", 00:35:44.809 "status": "finished", 00:35:44.809 "queue_depth": 128, 00:35:44.809 "io_size": 4096, 00:35:44.809 "runtime": 2.005544, 00:35:44.809 "iops": 27685.755086899117, 00:35:44.809 "mibps": 108.14748080819967, 00:35:44.809 "io_failed": 0, 00:35:44.809 "io_timeout": 0, 00:35:44.809 "avg_latency_us": 4615.093708899895, 00:35:44.809 "min_latency_us": 1942.6742857142858, 00:35:44.809 "max_latency_us": 6366.354285714286 00:35:44.809 } 00:35:44.809 ], 00:35:44.809 "core_count": 1 00:35:44.809 } 00:35:44.809 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:44.809 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:44.809 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:44.809 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:44.809 | select(.opcode=="crc32c") 00:35:44.809 | "\(.module_name) \(.executed)"' 00:35:44.809 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:45.067 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:45.067 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:45.067 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:45.067 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:45.067 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1196919 00:35:45.067 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1196919 ']' 00:35:45.067 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1196919 00:35:45.067 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:45.067 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:45.067 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1196919 00:35:45.067 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:45.067 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:45.067 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1196919' 00:35:45.067 killing process with pid 1196919 00:35:45.067 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1196919 00:35:45.067 Received shutdown signal, test time was about 2.000000 seconds 
00:35:45.067 00:35:45.067 Latency(us) 00:35:45.067 [2024-12-16T01:57:15.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:45.067 [2024-12-16T01:57:15.726Z] =================================================================================================================== 00:35:45.067 [2024-12-16T01:57:15.726Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:45.067 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1196919 00:35:45.326 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:45.326 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:45.326 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:45.326 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:45.326 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:45.326 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:45.326 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:45.326 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1197381 00:35:45.326 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1197381 /var/tmp/bperf.sock 00:35:45.326 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:45.326 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1197381 ']' 00:35:45.326 02:57:15 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:45.326 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:45.326 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:45.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:45.326 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:45.326 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:45.326 [2024-12-16 02:57:15.850638] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:45.326 [2024-12-16 02:57:15.850684] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197381 ] 00:35:45.326 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:45.326 Zero copy mechanism will not be used. 
00:35:45.326 [2024-12-16 02:57:15.921830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:45.326 [2024-12-16 02:57:15.944247] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:45.584 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:45.584 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:45.584 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:45.584 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:45.584 02:57:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:45.584 02:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:45.843 02:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:46.101 nvme0n1 00:35:46.101 02:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:46.101 02:57:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:46.101 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:46.101 Zero copy mechanism will not be used. 00:35:46.101 Running I/O for 2 seconds... 
00:35:48.415 6569.00 IOPS, 821.12 MiB/s [2024-12-16T01:57:19.074Z] 6229.50 IOPS, 778.69 MiB/s 00:35:48.415 Latency(us) 00:35:48.415 [2024-12-16T01:57:19.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:48.415 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:48.415 nvme0n1 : 2.00 6228.27 778.53 0.00 0.00 2564.87 1934.87 10860.25 00:35:48.415 [2024-12-16T01:57:19.074Z] =================================================================================================================== 00:35:48.415 [2024-12-16T01:57:19.074Z] Total : 6228.27 778.53 0.00 0.00 2564.87 1934.87 10860.25 00:35:48.415 { 00:35:48.415 "results": [ 00:35:48.415 { 00:35:48.415 "job": "nvme0n1", 00:35:48.415 "core_mask": "0x2", 00:35:48.415 "workload": "randwrite", 00:35:48.415 "status": "finished", 00:35:48.415 "queue_depth": 16, 00:35:48.415 "io_size": 131072, 00:35:48.415 "runtime": 2.003446, 00:35:48.415 "iops": 6228.268693041889, 00:35:48.415 "mibps": 778.5335866302361, 00:35:48.415 "io_failed": 0, 00:35:48.415 "io_timeout": 0, 00:35:48.415 "avg_latency_us": 2564.866499057389, 00:35:48.415 "min_latency_us": 1934.872380952381, 00:35:48.415 "max_latency_us": 10860.251428571428 00:35:48.415 } 00:35:48.415 ], 00:35:48.415 "core_count": 1 00:35:48.415 } 00:35:48.415 02:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:48.415 02:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:48.415 02:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:48.415 02:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:48.415 | select(.opcode=="crc32c") 00:35:48.415 | "\(.module_name) \(.executed)"' 00:35:48.415 02:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:48.415 02:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:48.415 02:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:48.415 02:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:48.415 02:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:48.415 02:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1197381 00:35:48.415 02:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1197381 ']' 00:35:48.415 02:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1197381 00:35:48.415 02:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:48.415 02:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:48.415 02:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1197381 00:35:48.415 02:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:48.415 02:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:48.415 02:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1197381' 00:35:48.415 killing process with pid 1197381 00:35:48.415 02:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1197381 00:35:48.415 Received shutdown signal, test time was about 2.000000 seconds 
00:35:48.415 00:35:48.415 Latency(us) 00:35:48.415 [2024-12-16T01:57:19.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:48.415 [2024-12-16T01:57:19.074Z] =================================================================================================================== 00:35:48.415 [2024-12-16T01:57:19.074Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:48.415 02:57:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1197381 00:35:48.674 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1195370 00:35:48.674 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1195370 ']' 00:35:48.674 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1195370 00:35:48.674 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:48.674 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:48.674 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1195370 00:35:48.674 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:48.674 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:48.674 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1195370' 00:35:48.674 killing process with pid 1195370 00:35:48.674 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1195370 00:35:48.674 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1195370 00:35:48.933 00:35:48.933 
real 0m14.106s 00:35:48.933 user 0m26.949s 00:35:48.933 sys 0m4.573s 00:35:48.933 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:48.933 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:48.933 ************************************ 00:35:48.933 END TEST nvmf_digest_clean 00:35:48.933 ************************************ 00:35:48.933 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:48.933 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:48.933 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:48.933 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:48.933 ************************************ 00:35:48.933 START TEST nvmf_digest_error 00:35:48.933 ************************************ 00:35:48.933 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:35:48.933 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:48.933 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:48.933 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:48.933 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:48.933 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1198076 00:35:48.933 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1198076 00:35:48.933 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:48.933 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1198076 ']' 00:35:48.933 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:48.933 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:48.933 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:48.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:48.933 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:48.933 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:48.933 [2024-12-16 02:57:19.480371] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:48.933 [2024-12-16 02:57:19.480412] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:48.933 [2024-12-16 02:57:19.559463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:48.933 [2024-12-16 02:57:19.580607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:48.933 [2024-12-16 02:57:19.580645] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:48.933 [2024-12-16 02:57:19.580652] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:48.933 [2024-12-16 02:57:19.580658] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:48.933 [2024-12-16 02:57:19.580663] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:48.933 [2024-12-16 02:57:19.581169] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:49.193 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:49.193 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:49.193 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:49.193 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:49.193 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:49.193 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:49.193 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:49.193 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.193 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:49.193 [2024-12-16 02:57:19.665646] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:49.193 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.193 02:57:19 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:49.193 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:49.193 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.193 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:49.193 null0 00:35:49.193 [2024-12-16 02:57:19.756707] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:49.193 [2024-12-16 02:57:19.780909] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:49.193 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.193 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:49.193 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:49.193 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:49.193 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:49.193 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:49.193 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1198098 00:35:49.193 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1198098 /var/tmp/bperf.sock 00:35:49.193 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:49.193 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1198098 ']' 
00:35:49.193 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:49.193 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:49.193 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:49.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:49.193 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:49.193 02:57:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:49.193 [2024-12-16 02:57:19.831632] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:49.193 [2024-12-16 02:57:19.831674] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1198098 ] 00:35:49.452 [2024-12-16 02:57:19.905397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:49.452 [2024-12-16 02:57:19.927596] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:49.452 02:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:49.452 02:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:49.452 02:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:49.452 02:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:49.710 02:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:49.710 02:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.710 02:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:49.710 02:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.710 02:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:49.710 02:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:49.969 nvme0n1 00:35:49.969 02:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:49.969 02:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.969 02:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:49.969 02:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.969 02:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:49.969 02:57:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:49.969 Running I/O for 2 seconds... 00:35:49.969 [2024-12-16 02:57:20.602167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:49.969 [2024-12-16 02:57:20.602200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:2258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.969 [2024-12-16 02:57:20.602210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:49.969 [2024-12-16 02:57:20.613406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:49.969 [2024-12-16 02:57:20.613442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.969 [2024-12-16 02:57:20.613452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:49.969 [2024-12-16 02:57:20.621883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:49.969 [2024-12-16 02:57:20.621906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:49.969 [2024-12-16 02:57:20.621914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.229 [2024-12-16 02:57:20.633300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.229 [2024-12-16 02:57:20.633323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10683 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.229 [2024-12-16 02:57:20.633332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.229 [2024-12-16 02:57:20.642400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.229 [2024-12-16 02:57:20.642422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.229 [2024-12-16 02:57:20.642431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.229 [2024-12-16 02:57:20.652630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.229 [2024-12-16 02:57:20.652651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.229 [2024-12-16 02:57:20.652660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.229 [2024-12-16 02:57:20.662532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.229 [2024-12-16 02:57:20.662553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.229 [2024-12-16 02:57:20.662562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.229 [2024-12-16 02:57:20.672753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.229 [2024-12-16 02:57:20.672774] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.229 [2024-12-16 02:57:20.672782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.229 [2024-12-16 02:57:20.683216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.229 [2024-12-16 02:57:20.683237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.229 [2024-12-16 02:57:20.683246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.229 [2024-12-16 02:57:20.692815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.229 [2024-12-16 02:57:20.692837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.229 [2024-12-16 02:57:20.692845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.229 [2024-12-16 02:57:20.701537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.229 [2024-12-16 02:57:20.701559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.229 [2024-12-16 02:57:20.701568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.229 [2024-12-16 02:57:20.711697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xf036e0) 00:35:50.229 [2024-12-16 02:57:20.711720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.229 [2024-12-16 02:57:20.711728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.229 [2024-12-16 02:57:20.720444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.229 [2024-12-16 02:57:20.720464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:23276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.229 [2024-12-16 02:57:20.720473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.229 [2024-12-16 02:57:20.730425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.229 [2024-12-16 02:57:20.730445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.229 [2024-12-16 02:57:20.730453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.229 [2024-12-16 02:57:20.738997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.229 [2024-12-16 02:57:20.739018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.229 [2024-12-16 02:57:20.739030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.229 [2024-12-16 02:57:20.748114] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.229 [2024-12-16 02:57:20.748135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.229 [2024-12-16 02:57:20.748143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.229 [2024-12-16 02:57:20.757514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.229 [2024-12-16 02:57:20.757536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.229 [2024-12-16 02:57:20.757544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.229 [2024-12-16 02:57:20.767016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.229 [2024-12-16 02:57:20.767037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.229 [2024-12-16 02:57:20.767045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.229 [2024-12-16 02:57:20.776062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.229 [2024-12-16 02:57:20.776084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.229 [2024-12-16 02:57:20.776092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:35:50.229 [2024-12-16 02:57:20.784694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.229 [2024-12-16 02:57:20.784715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.229 [2024-12-16 02:57:20.784723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.229 [2024-12-16 02:57:20.794400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.229 [2024-12-16 02:57:20.794420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.229 [2024-12-16 02:57:20.794428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.229 [2024-12-16 02:57:20.804203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.229 [2024-12-16 02:57:20.804224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.229 [2024-12-16 02:57:20.804232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.229 [2024-12-16 02:57:20.812718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.229 [2024-12-16 02:57:20.812738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.229 [2024-12-16 02:57:20.812746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.229 [2024-12-16 02:57:20.822320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.229 [2024-12-16 02:57:20.822344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.229 [2024-12-16 02:57:20.822352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.229 [2024-12-16 02:57:20.831267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.229 [2024-12-16 02:57:20.831288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.229 [2024-12-16 02:57:20.831296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.229 [2024-12-16 02:57:20.840227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.229 [2024-12-16 02:57:20.840247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.229 [2024-12-16 02:57:20.840255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.229 [2024-12-16 02:57:20.850634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.229 [2024-12-16 02:57:20.850656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.229 [2024-12-16 02:57:20.850663] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.229 [2024-12-16 02:57:20.859680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.229 [2024-12-16 02:57:20.859701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.229 [2024-12-16 02:57:20.859708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.230 [2024-12-16 02:57:20.868035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.230 [2024-12-16 02:57:20.868055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:23135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.230 [2024-12-16 02:57:20.868064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.230 [2024-12-16 02:57:20.878672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.230 [2024-12-16 02:57:20.878693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.230 [2024-12-16 02:57:20.878701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.489 [2024-12-16 02:57:20.887874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.489 [2024-12-16 02:57:20.887896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:14750 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:50.489 [2024-12-16 02:57:20.887905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.489 [2024-12-16 02:57:20.896648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.489 [2024-12-16 02:57:20.896667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.489 [2024-12-16 02:57:20.896679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.489 [2024-12-16 02:57:20.906272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.489 [2024-12-16 02:57:20.906292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.489 [2024-12-16 02:57:20.906300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.489 [2024-12-16 02:57:20.918448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.489 [2024-12-16 02:57:20.918468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:6380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.489 [2024-12-16 02:57:20.918478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.489 [2024-12-16 02:57:20.929567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.489 [2024-12-16 02:57:20.929588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:14 nsid:1 lba:9237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.489 [2024-12-16 02:57:20.929596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.489 [2024-12-16 02:57:20.937616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.489 [2024-12-16 02:57:20.937636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.489 [2024-12-16 02:57:20.937644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.489 [2024-12-16 02:57:20.948619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.489 [2024-12-16 02:57:20.948645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:16566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.489 [2024-12-16 02:57:20.948653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.489 [2024-12-16 02:57:20.960200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.489 [2024-12-16 02:57:20.960219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.489 [2024-12-16 02:57:20.960227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.489 [2024-12-16 02:57:20.972697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.489 [2024-12-16 02:57:20.972719] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.489 [2024-12-16 02:57:20.972727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.489 [2024-12-16 02:57:20.984302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.489 [2024-12-16 02:57:20.984322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.490 [2024-12-16 02:57:20.984330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.490 [2024-12-16 02:57:20.992041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.490 [2024-12-16 02:57:20.992068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.490 [2024-12-16 02:57:20.992076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.490 [2024-12-16 02:57:21.004059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.490 [2024-12-16 02:57:21.004079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.490 [2024-12-16 02:57:21.004087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.490 [2024-12-16 02:57:21.015978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xf036e0) 00:35:50.490 [2024-12-16 02:57:21.015999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.490 [2024-12-16 02:57:21.016008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.490 [2024-12-16 02:57:21.027870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.490 [2024-12-16 02:57:21.027891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.490 [2024-12-16 02:57:21.027903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.490 [2024-12-16 02:57:21.038091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.490 [2024-12-16 02:57:21.038114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.490 [2024-12-16 02:57:21.038122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.490 [2024-12-16 02:57:21.046325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.490 [2024-12-16 02:57:21.046346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.490 [2024-12-16 02:57:21.046354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.490 [2024-12-16 02:57:21.056085] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.490 [2024-12-16 02:57:21.056106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.490 [2024-12-16 02:57:21.056114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.490 [2024-12-16 02:57:21.064822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.490 [2024-12-16 02:57:21.064844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.490 [2024-12-16 02:57:21.064858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.490 [2024-12-16 02:57:21.076283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.490 [2024-12-16 02:57:21.076305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.490 [2024-12-16 02:57:21.076313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.490 [2024-12-16 02:57:21.088897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.490 [2024-12-16 02:57:21.088918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.490 [2024-12-16 02:57:21.088927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:50.490 [2024-12-16 02:57:21.097492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.490 [2024-12-16 02:57:21.097513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.490 [2024-12-16 02:57:21.097521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.490 [2024-12-16 02:57:21.108625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.490 [2024-12-16 02:57:21.108646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.490 [2024-12-16 02:57:21.108654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.490 [2024-12-16 02:57:21.116629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.490 [2024-12-16 02:57:21.116651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.490 [2024-12-16 02:57:21.116659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.490 [2024-12-16 02:57:21.127669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.490 [2024-12-16 02:57:21.127689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2616 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.490 [2024-12-16 02:57:21.127698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.490 [2024-12-16 02:57:21.139718] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.490 [2024-12-16 02:57:21.139740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.490 [2024-12-16 02:57:21.139748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.750 [2024-12-16 02:57:21.151400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.750 [2024-12-16 02:57:21.151422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.750 [2024-12-16 02:57:21.151431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.750 [2024-12-16 02:57:21.159785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.750 [2024-12-16 02:57:21.159805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:11041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.750 [2024-12-16 02:57:21.159814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.750 [2024-12-16 02:57:21.169330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.750 [2024-12-16 02:57:21.169352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:5164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.750 [2024-12-16 02:57:21.169363] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.750 [2024-12-16 02:57:21.178774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.750 [2024-12-16 02:57:21.178795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.750 [2024-12-16 02:57:21.178804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.750 [2024-12-16 02:57:21.189044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.750 [2024-12-16 02:57:21.189065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:8874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.750 [2024-12-16 02:57:21.189074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.750 [2024-12-16 02:57:21.199094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.750 [2024-12-16 02:57:21.199114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.750 [2024-12-16 02:57:21.199122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.750 [2024-12-16 02:57:21.211356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.750 [2024-12-16 02:57:21.211377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20318 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:50.750 [2024-12-16 02:57:21.211385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.750 [2024-12-16 02:57:21.220265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.750 [2024-12-16 02:57:21.220286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.750 [2024-12-16 02:57:21.220294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.750 [2024-12-16 02:57:21.232985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.750 [2024-12-16 02:57:21.233006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.750 [2024-12-16 02:57:21.233014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.750 [2024-12-16 02:57:21.244949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.750 [2024-12-16 02:57:21.244970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:5761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.750 [2024-12-16 02:57:21.244978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.750 [2024-12-16 02:57:21.255253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.750 [2024-12-16 02:57:21.255274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:36 nsid:1 lba:13267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.750 [2024-12-16 02:57:21.255282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.750 [2024-12-16 02:57:21.264252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.750 [2024-12-16 02:57:21.264275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:7732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.750 [2024-12-16 02:57:21.264284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.750 [2024-12-16 02:57:21.276706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.750 [2024-12-16 02:57:21.276727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:16537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.750 [2024-12-16 02:57:21.276735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.750 [2024-12-16 02:57:21.284813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.750 [2024-12-16 02:57:21.284833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.750 [2024-12-16 02:57:21.284841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.750 [2024-12-16 02:57:21.294842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.750 [2024-12-16 02:57:21.294869] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:3072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.750 [2024-12-16 02:57:21.294878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.750 [2024-12-16 02:57:21.304764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.750 [2024-12-16 02:57:21.304784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:4899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.750 [2024-12-16 02:57:21.304792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.750 [2024-12-16 02:57:21.314827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.751 [2024-12-16 02:57:21.314854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.751 [2024-12-16 02:57:21.314863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.751 [2024-12-16 02:57:21.323249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.751 [2024-12-16 02:57:21.323269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.751 [2024-12-16 02:57:21.323277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.751 [2024-12-16 02:57:21.332060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xf036e0) 00:35:50.751 [2024-12-16 02:57:21.332080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:18124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.751 [2024-12-16 02:57:21.332089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.751 [2024-12-16 02:57:21.342057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.751 [2024-12-16 02:57:21.342077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:3543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.751 [2024-12-16 02:57:21.342085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.751 [2024-12-16 02:57:21.351115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.751 [2024-12-16 02:57:21.351135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.751 [2024-12-16 02:57:21.351147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.751 [2024-12-16 02:57:21.359882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.751 [2024-12-16 02:57:21.359902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.751 [2024-12-16 02:57:21.359910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.751 [2024-12-16 02:57:21.369884] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.751 [2024-12-16 02:57:21.369906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.751 [2024-12-16 02:57:21.369915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.751 [2024-12-16 02:57:21.380837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.751 [2024-12-16 02:57:21.380863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.751 [2024-12-16 02:57:21.380871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.751 [2024-12-16 02:57:21.393380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.751 [2024-12-16 02:57:21.393400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.751 [2024-12-16 02:57:21.393408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:50.751 [2024-12-16 02:57:21.401751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:50.751 [2024-12-16 02:57:21.401772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.751 [2024-12-16 02:57:21.401781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:35:51.010 [2024-12-16 02:57:21.413282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.010 [2024-12-16 02:57:21.413304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.010 [2024-12-16 02:57:21.413312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.010 [2024-12-16 02:57:21.426109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.010 [2024-12-16 02:57:21.426130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.010 [2024-12-16 02:57:21.426139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.010 [2024-12-16 02:57:21.434292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.011 [2024-12-16 02:57:21.434314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.011 [2024-12-16 02:57:21.434326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.011 [2024-12-16 02:57:21.446670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.011 [2024-12-16 02:57:21.446691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.011 [2024-12-16 02:57:21.446699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.011 [2024-12-16 02:57:21.458489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.011 [2024-12-16 02:57:21.458510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.011 [2024-12-16 02:57:21.458518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.011 [2024-12-16 02:57:21.467062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.011 [2024-12-16 02:57:21.467082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.011 [2024-12-16 02:57:21.467090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.011 [2024-12-16 02:57:21.479787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.011 [2024-12-16 02:57:21.479808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:8875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.011 [2024-12-16 02:57:21.479816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.011 [2024-12-16 02:57:21.490532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.011 [2024-12-16 02:57:21.490553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.011 [2024-12-16 02:57:21.490561] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.011 [2024-12-16 02:57:21.499301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.011 [2024-12-16 02:57:21.499322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.011 [2024-12-16 02:57:21.499330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.011 [2024-12-16 02:57:21.508873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.011 [2024-12-16 02:57:21.508894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:18694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.011 [2024-12-16 02:57:21.508903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.011 [2024-12-16 02:57:21.518630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.011 [2024-12-16 02:57:21.518651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.011 [2024-12-16 02:57:21.518659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.011 [2024-12-16 02:57:21.528447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.011 [2024-12-16 02:57:21.528472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:51.011 [2024-12-16 02:57:21.528481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.011 [2024-12-16 02:57:21.537739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.011 [2024-12-16 02:57:21.537760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.011 [2024-12-16 02:57:21.537768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.011 [2024-12-16 02:57:21.546652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.011 [2024-12-16 02:57:21.546674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.011 [2024-12-16 02:57:21.546682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.011 [2024-12-16 02:57:21.556071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.011 [2024-12-16 02:57:21.556091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.011 [2024-12-16 02:57:21.556100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.011 [2024-12-16 02:57:21.566487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.011 [2024-12-16 02:57:21.566507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 
nsid:1 lba:2579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.011 [2024-12-16 02:57:21.566515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.011 [2024-12-16 02:57:21.575304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.011 [2024-12-16 02:57:21.575326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.011 [2024-12-16 02:57:21.575335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.011 25186.00 IOPS, 98.38 MiB/s [2024-12-16T01:57:21.670Z] [2024-12-16 02:57:21.587068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.011 [2024-12-16 02:57:21.587090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.011 [2024-12-16 02:57:21.587098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.011 [2024-12-16 02:57:21.595216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.011 [2024-12-16 02:57:21.595237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.011 [2024-12-16 02:57:21.595245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.011 [2024-12-16 02:57:21.606738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 
00:35:51.011 [2024-12-16 02:57:21.606760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.011 [2024-12-16 02:57:21.606771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.011 [2024-12-16 02:57:21.618349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.011 [2024-12-16 02:57:21.618370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.011 [2024-12-16 02:57:21.618378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.011 [2024-12-16 02:57:21.627563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.011 [2024-12-16 02:57:21.627583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.011 [2024-12-16 02:57:21.627591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.011 [2024-12-16 02:57:21.637858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.011 [2024-12-16 02:57:21.637880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.011 [2024-12-16 02:57:21.637888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.011 [2024-12-16 02:57:21.650201] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.011 [2024-12-16 02:57:21.650222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.011 [2024-12-16 02:57:21.650231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.011 [2024-12-16 02:57:21.662691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.011 [2024-12-16 02:57:21.662711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:13138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.011 [2024-12-16 02:57:21.662719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.271 [2024-12-16 02:57:21.675120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.271 [2024-12-16 02:57:21.675142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:12219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.271 [2024-12-16 02:57:21.675150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.271 [2024-12-16 02:57:21.685506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.271 [2024-12-16 02:57:21.685527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.271 [2024-12-16 02:57:21.685535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:35:51.271 [2024-12-16 02:57:21.694160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.271 [2024-12-16 02:57:21.694180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.271 [2024-12-16 02:57:21.694189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.271 [2024-12-16 02:57:21.706587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.271 [2024-12-16 02:57:21.706612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.271 [2024-12-16 02:57:21.706620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.271 [2024-12-16 02:57:21.714740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.271 [2024-12-16 02:57:21.714761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.272 [2024-12-16 02:57:21.714769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.272 [2024-12-16 02:57:21.725822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.272 [2024-12-16 02:57:21.725843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.272 [2024-12-16 02:57:21.725857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.272 [2024-12-16 02:57:21.735714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.272 [2024-12-16 02:57:21.735735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.272 [2024-12-16 02:57:21.735744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.272 [2024-12-16 02:57:21.744236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.272 [2024-12-16 02:57:21.744257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:24761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.272 [2024-12-16 02:57:21.744265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.272 [2024-12-16 02:57:21.753706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.272 [2024-12-16 02:57:21.753726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.272 [2024-12-16 02:57:21.753734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.272 [2024-12-16 02:57:21.763694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.272 [2024-12-16 02:57:21.763715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.272 [2024-12-16 02:57:21.763723] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.272 [2024-12-16 02:57:21.773178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.272 [2024-12-16 02:57:21.773201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.272 [2024-12-16 02:57:21.773210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.272 [2024-12-16 02:57:21.786022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.272 [2024-12-16 02:57:21.786044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.272 [2024-12-16 02:57:21.786053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.272 [2024-12-16 02:57:21.793969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.272 [2024-12-16 02:57:21.793990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.272 [2024-12-16 02:57:21.793999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.272 [2024-12-16 02:57:21.805826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.272 [2024-12-16 02:57:21.805853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6360 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:51.272 [2024-12-16 02:57:21.805861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.272 [2024-12-16 02:57:21.814180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.272 [2024-12-16 02:57:21.814201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.272 [2024-12-16 02:57:21.814209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.272 [2024-12-16 02:57:21.824730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.272 [2024-12-16 02:57:21.824751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:15354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.272 [2024-12-16 02:57:21.824759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.272 [2024-12-16 02:57:21.832731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.272 [2024-12-16 02:57:21.832753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.272 [2024-12-16 02:57:21.832761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.272 [2024-12-16 02:57:21.843243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.272 [2024-12-16 02:57:21.843265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:69 nsid:1 lba:21709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.272 [2024-12-16 02:57:21.843273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.272 [2024-12-16 02:57:21.854913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.272 [2024-12-16 02:57:21.854933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.272 [2024-12-16 02:57:21.854942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.272 [2024-12-16 02:57:21.865501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.272 [2024-12-16 02:57:21.865522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.272 [2024-12-16 02:57:21.865530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.272 [2024-12-16 02:57:21.873563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.272 [2024-12-16 02:57:21.873584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.272 [2024-12-16 02:57:21.873596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.272 [2024-12-16 02:57:21.884238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.272 [2024-12-16 02:57:21.884259] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.272 [2024-12-16 02:57:21.884267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.272 [2024-12-16 02:57:21.892148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.272 [2024-12-16 02:57:21.892168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.272 [2024-12-16 02:57:21.892176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.272 [2024-12-16 02:57:21.902188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.272 [2024-12-16 02:57:21.902210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.272 [2024-12-16 02:57:21.902218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.272 [2024-12-16 02:57:21.911753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.272 [2024-12-16 02:57:21.911774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.272 [2024-12-16 02:57:21.911782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.272 [2024-12-16 02:57:21.920069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 
00:35:51.272 [2024-12-16 02:57:21.920089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.272 [2024-12-16 02:57:21.920097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.532 [2024-12-16 02:57:21.930677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.532 [2024-12-16 02:57:21.930699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.532 [2024-12-16 02:57:21.930707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.532 [2024-12-16 02:57:21.940099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.532 [2024-12-16 02:57:21.940120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.532 [2024-12-16 02:57:21.940128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.532 [2024-12-16 02:57:21.951349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.532 [2024-12-16 02:57:21.951371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.532 [2024-12-16 02:57:21.951379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.532 [2024-12-16 02:57:21.959884] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.532 [2024-12-16 02:57:21.959909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.532 [2024-12-16 02:57:21.959917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.532 [2024-12-16 02:57:21.971125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.532 [2024-12-16 02:57:21.971150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.532 [2024-12-16 02:57:21.971159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.532 [2024-12-16 02:57:21.980310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.532 [2024-12-16 02:57:21.980332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.532 [2024-12-16 02:57:21.980340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.532 [2024-12-16 02:57:21.990347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.532 [2024-12-16 02:57:21.990368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.532 [2024-12-16 02:57:21.990376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:35:51.532 [2024-12-16 02:57:21.999212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.532 [2024-12-16 02:57:21.999234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:25184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.532 [2024-12-16 02:57:21.999242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.532 [2024-12-16 02:57:22.011318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.532 [2024-12-16 02:57:22.011339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.532 [2024-12-16 02:57:22.011348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.532 [2024-12-16 02:57:22.022107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.532 [2024-12-16 02:57:22.022127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.532 [2024-12-16 02:57:22.022136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.532 [2024-12-16 02:57:22.030532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.532 [2024-12-16 02:57:22.030553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.532 [2024-12-16 02:57:22.030561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.532 [2024-12-16 02:57:22.040314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.532 [2024-12-16 02:57:22.040335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.532 [2024-12-16 02:57:22.040343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.532 [2024-12-16 02:57:22.050317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.532 [2024-12-16 02:57:22.050338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.532 [2024-12-16 02:57:22.050346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.532 [2024-12-16 02:57:22.058963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.532 [2024-12-16 02:57:22.058986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.532 [2024-12-16 02:57:22.058994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.532 [2024-12-16 02:57:22.068673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.532 [2024-12-16 02:57:22.068695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:22621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.532 [2024-12-16 02:57:22.068703] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.532 [2024-12-16 02:57:22.079198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.532 [2024-12-16 02:57:22.079220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:20093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.532 [2024-12-16 02:57:22.079228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.532 [2024-12-16 02:57:22.088927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.532 [2024-12-16 02:57:22.088947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:12585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.532 [2024-12-16 02:57:22.088955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.532 [2024-12-16 02:57:22.098624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.532 [2024-12-16 02:57:22.098644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.532 [2024-12-16 02:57:22.098652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.532 [2024-12-16 02:57:22.107263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.532 [2024-12-16 02:57:22.107284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:22476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:51.532 [2024-12-16 02:57:22.107291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.532 [2024-12-16 02:57:22.116990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.532 [2024-12-16 02:57:22.117011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.532 [2024-12-16 02:57:22.117019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.532 [2024-12-16 02:57:22.128791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.532 [2024-12-16 02:57:22.128815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.532 [2024-12-16 02:57:22.128827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.532 [2024-12-16 02:57:22.139795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.532 [2024-12-16 02:57:22.139816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:7918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.532 [2024-12-16 02:57:22.139824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.532 [2024-12-16 02:57:22.149138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.532 [2024-12-16 02:57:22.149159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 
nsid:1 lba:20668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.532 [2024-12-16 02:57:22.149167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.532 [2024-12-16 02:57:22.159652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.532 [2024-12-16 02:57:22.159675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.532 [2024-12-16 02:57:22.159683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.532 [2024-12-16 02:57:22.171202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.532 [2024-12-16 02:57:22.171224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.533 [2024-12-16 02:57:22.171232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.533 [2024-12-16 02:57:22.179741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.533 [2024-12-16 02:57:22.179763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.533 [2024-12-16 02:57:22.179771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.792 [2024-12-16 02:57:22.192421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.792 [2024-12-16 02:57:22.192444] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.792 [2024-12-16 02:57:22.192452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.792 [2024-12-16 02:57:22.203176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.792 [2024-12-16 02:57:22.203198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.792 [2024-12-16 02:57:22.203206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.792 [2024-12-16 02:57:22.215298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.792 [2024-12-16 02:57:22.215320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.792 [2024-12-16 02:57:22.215328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.792 [2024-12-16 02:57:22.225564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.792 [2024-12-16 02:57:22.225584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:19268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.792 [2024-12-16 02:57:22.225592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.792 [2024-12-16 02:57:22.234112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 
00:35:51.792 [2024-12-16 02:57:22.234132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.792 [2024-12-16 02:57:22.234140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.792 [2024-12-16 02:57:22.245186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.792 [2024-12-16 02:57:22.245207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.792 [2024-12-16 02:57:22.245215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.792 [2024-12-16 02:57:22.253938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.792 [2024-12-16 02:57:22.253969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.792 [2024-12-16 02:57:22.253977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.792 [2024-12-16 02:57:22.266714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.792 [2024-12-16 02:57:22.266734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.792 [2024-12-16 02:57:22.266742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.792 [2024-12-16 02:57:22.278965] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.792 [2024-12-16 02:57:22.278986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:2460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.792 [2024-12-16 02:57:22.278994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.792 [2024-12-16 02:57:22.290206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.792 [2024-12-16 02:57:22.290228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.792 [2024-12-16 02:57:22.290236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.792 [2024-12-16 02:57:22.299568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.792 [2024-12-16 02:57:22.299589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.792 [2024-12-16 02:57:22.299597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.792 [2024-12-16 02:57:22.311862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.792 [2024-12-16 02:57:22.311899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.792 [2024-12-16 02:57:22.311914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:35:51.792 [2024-12-16 02:57:22.320953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.792 [2024-12-16 02:57:22.320974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.792 [2024-12-16 02:57:22.320982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.792 [2024-12-16 02:57:22.332697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.793 [2024-12-16 02:57:22.332718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.793 [2024-12-16 02:57:22.332726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.793 [2024-12-16 02:57:22.344472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.793 [2024-12-16 02:57:22.344493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:16813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.793 [2024-12-16 02:57:22.344502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.793 [2024-12-16 02:57:22.355490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.793 [2024-12-16 02:57:22.355510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.793 [2024-12-16 02:57:22.355518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.793 [2024-12-16 02:57:22.364171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.793 [2024-12-16 02:57:22.364193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.793 [2024-12-16 02:57:22.364201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.793 [2024-12-16 02:57:22.374500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.793 [2024-12-16 02:57:22.374521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.793 [2024-12-16 02:57:22.374530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.793 [2024-12-16 02:57:22.384235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.793 [2024-12-16 02:57:22.384256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.793 [2024-12-16 02:57:22.384264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.793 [2024-12-16 02:57:22.392726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.793 [2024-12-16 02:57:22.392747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.793 [2024-12-16 02:57:22.392755] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.793 [2024-12-16 02:57:22.402414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.793 [2024-12-16 02:57:22.402439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.793 [2024-12-16 02:57:22.402447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.793 [2024-12-16 02:57:22.414044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.793 [2024-12-16 02:57:22.414065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.793 [2024-12-16 02:57:22.414073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.793 [2024-12-16 02:57:22.426488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.793 [2024-12-16 02:57:22.426509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:10919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.793 [2024-12-16 02:57:22.426517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.793 [2024-12-16 02:57:22.434808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.793 [2024-12-16 02:57:22.434828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18166 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:51.793 [2024-12-16 02:57:22.434836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:51.793 [2024-12-16 02:57:22.446325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:51.793 [2024-12-16 02:57:22.446346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.793 [2024-12-16 02:57:22.446354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.052 [2024-12-16 02:57:22.458585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:52.052 [2024-12-16 02:57:22.458606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.052 [2024-12-16 02:57:22.458614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.052 [2024-12-16 02:57:22.471130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:52.052 [2024-12-16 02:57:22.471152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:25228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.052 [2024-12-16 02:57:22.471160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.052 [2024-12-16 02:57:22.483727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:52.052 [2024-12-16 02:57:22.483749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:41 nsid:1 lba:12610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.052 [2024-12-16 02:57:22.483757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.052 [2024-12-16 02:57:22.491790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:52.052 [2024-12-16 02:57:22.491811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.052 [2024-12-16 02:57:22.491818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.052 [2024-12-16 02:57:22.503622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:52.053 [2024-12-16 02:57:22.503643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.053 [2024-12-16 02:57:22.503651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.053 [2024-12-16 02:57:22.516555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:52.053 [2024-12-16 02:57:22.516577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.053 [2024-12-16 02:57:22.516585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.053 [2024-12-16 02:57:22.529212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:52.053 [2024-12-16 02:57:22.529233] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:24187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.053 [2024-12-16 02:57:22.529241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.053 [2024-12-16 02:57:22.537692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:52.053 [2024-12-16 02:57:22.537712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.053 [2024-12-16 02:57:22.537720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.053 [2024-12-16 02:57:22.550037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:52.053 [2024-12-16 02:57:22.550057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.053 [2024-12-16 02:57:22.550065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.053 [2024-12-16 02:57:22.562612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:52.053 [2024-12-16 02:57:22.562634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.053 [2024-12-16 02:57:22.562642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.053 [2024-12-16 02:57:22.573821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xf036e0) 00:35:52.053 [2024-12-16 02:57:22.573843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.053 [2024-12-16 02:57:22.573857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.053 [2024-12-16 02:57:22.587692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xf036e0) 00:35:52.053 [2024-12-16 02:57:22.587714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1814 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.053 [2024-12-16 02:57:22.587722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:52.053 24853.50 IOPS, 97.08 MiB/s 00:35:52.053 Latency(us) 00:35:52.053 [2024-12-16T01:57:22.712Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:52.053 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:52.053 nvme0n1 : 2.00 24878.50 97.18 0.00 0.00 5140.60 2309.36 17725.93 00:35:52.053 [2024-12-16T01:57:22.712Z] =================================================================================================================== 00:35:52.053 [2024-12-16T01:57:22.712Z] Total : 24878.50 97.18 0.00 0.00 5140.60 2309.36 17725.93 00:35:52.053 { 00:35:52.053 "results": [ 00:35:52.053 { 00:35:52.053 "job": "nvme0n1", 00:35:52.053 "core_mask": "0x2", 00:35:52.053 "workload": "randread", 00:35:52.053 "status": "finished", 00:35:52.053 "queue_depth": 128, 00:35:52.053 "io_size": 4096, 00:35:52.053 "runtime": 2.003939, 00:35:52.053 "iops": 24878.50179072317, 00:35:52.053 "mibps": 97.18164762001238, 00:35:52.053 "io_failed": 0, 00:35:52.053 "io_timeout": 0, 00:35:52.053 "avg_latency_us": 5140.604860992115, 00:35:52.053 
"min_latency_us": 2309.3638095238093, 00:35:52.053 "max_latency_us": 17725.92761904762 00:35:52.053 } 00:35:52.053 ], 00:35:52.053 "core_count": 1 00:35:52.053 } 00:35:52.053 02:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:52.053 02:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:52.053 02:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:52.053 | .driver_specific 00:35:52.053 | .nvme_error 00:35:52.053 | .status_code 00:35:52.053 | .command_transient_transport_error' 00:35:52.053 02:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:52.312 02:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 195 > 0 )) 00:35:52.312 02:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1198098 00:35:52.312 02:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1198098 ']' 00:35:52.312 02:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1198098 00:35:52.312 02:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:52.312 02:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:52.312 02:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1198098 00:35:52.312 02:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:52.312 02:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:35:52.312 02:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1198098' 00:35:52.312 killing process with pid 1198098 00:35:52.312 02:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1198098 00:35:52.312 Received shutdown signal, test time was about 2.000000 seconds 00:35:52.312 00:35:52.312 Latency(us) 00:35:52.312 [2024-12-16T01:57:22.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:52.312 [2024-12-16T01:57:22.971Z] =================================================================================================================== 00:35:52.312 [2024-12-16T01:57:22.971Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:52.312 02:57:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1198098 00:35:52.571 02:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:35:52.571 02:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:52.571 02:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:52.571 02:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:52.571 02:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:52.571 02:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1198561 00:35:52.571 02:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1198561 /var/tmp/bperf.sock 00:35:52.571 02:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:35:52.572 02:57:23 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1198561 ']' 00:35:52.572 02:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:52.572 02:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:52.572 02:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:52.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:52.572 02:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:52.572 02:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:52.572 [2024-12-16 02:57:23.061297] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:52.572 [2024-12-16 02:57:23.061348] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1198561 ] 00:35:52.572 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:52.572 Zero copy mechanism will not be used. 
00:35:52.572 [2024-12-16 02:57:23.133801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:52.572 [2024-12-16 02:57:23.155259] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:52.830 02:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:52.830 02:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:52.830 02:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:52.830 02:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:52.830 02:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:52.830 02:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.830 02:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:52.830 02:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.830 02:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:52.830 02:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:53.399 nvme0n1 00:35:53.399 02:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:53.399 02:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.399 02:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:53.399 02:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.400 02:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:53.400 02:57:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:53.400 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:53.400 Zero copy mechanism will not be used. 00:35:53.400 Running I/O for 2 seconds... 00:35:53.400 [2024-12-16 02:57:23.905615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.400 [2024-12-16 02:57:23.905650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.400 [2024-12-16 02:57:23.905661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:53.400 [2024-12-16 02:57:23.910990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.400 [2024-12-16 02:57:23.911016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.400 [2024-12-16 02:57:23.911025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:53.400 
[2024-12-16 02:57:23.916319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.400 [2024-12-16 02:57:23.916342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.400 [2024-12-16 02:57:23.916350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:53.400 [2024-12-16 02:57:23.921788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.400 [2024-12-16 02:57:23.921811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.400 [2024-12-16 02:57:23.921819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:53.400 [2024-12-16 02:57:23.927095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.400 [2024-12-16 02:57:23.927119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.400 [2024-12-16 02:57:23.927128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:53.400 [2024-12-16 02:57:23.930639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.400 [2024-12-16 02:57:23.930661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.400 [2024-12-16 02:57:23.930669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:53.400 [2024-12-16 02:57:23.935333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.400 [2024-12-16 02:57:23.935355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.400 [2024-12-16 02:57:23.935364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:53.400 [2024-12-16 02:57:23.940743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.400 [2024-12-16 02:57:23.940766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.400 [2024-12-16 02:57:23.940774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:53.400 [2024-12-16 02:57:23.946034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.400 [2024-12-16 02:57:23.946057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.400 [2024-12-16 02:57:23.946066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:53.400 [2024-12-16 02:57:23.951764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.400 [2024-12-16 02:57:23.951785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.400 [2024-12-16 02:57:23.951794] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:53.400 [2024-12-16 02:57:23.957177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.400 [2024-12-16 02:57:23.957199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.400 [2024-12-16 02:57:23.957207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:53.400 [2024-12-16 02:57:23.962762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.400 [2024-12-16 02:57:23.962784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.400 [2024-12-16 02:57:23.962791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:53.400 [2024-12-16 02:57:23.968142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.400 [2024-12-16 02:57:23.968164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.400 [2024-12-16 02:57:23.968173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:53.400 [2024-12-16 02:57:23.973635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.400 [2024-12-16 02:57:23.973657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:53.400 [2024-12-16 02:57:23.973666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:53.400 [2024-12-16 02:57:23.979072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.400 [2024-12-16 02:57:23.979094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.400 [2024-12-16 02:57:23.979103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:53.400 [2024-12-16 02:57:23.984390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.400 [2024-12-16 02:57:23.984412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.400 [2024-12-16 02:57:23.984420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:53.400 [2024-12-16 02:57:23.989946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.400 [2024-12-16 02:57:23.989968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.400 [2024-12-16 02:57:23.989980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:53.400 [2024-12-16 02:57:23.995141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.400 [2024-12-16 02:57:23.995169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.400 [2024-12-16 02:57:23.995177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:53.400 [2024-12-16 02:57:24.000510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.400 [2024-12-16 02:57:24.000532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.400 [2024-12-16 02:57:24.000541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:53.400 [2024-12-16 02:57:24.005405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.400 [2024-12-16 02:57:24.005428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.400 [2024-12-16 02:57:24.005436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:53.400 [2024-12-16 02:57:24.010681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.400 [2024-12-16 02:57:24.010703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.400 [2024-12-16 02:57:24.010711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:53.400 [2024-12-16 02:57:24.016233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.400 [2024-12-16 02:57:24.016257] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.400 [2024-12-16 02:57:24.016265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:53.400 [2024-12-16 02:57:24.022007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.400 [2024-12-16 02:57:24.022029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.400 [2024-12-16 02:57:24.022037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:53.400 [2024-12-16 02:57:24.027350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.400 [2024-12-16 02:57:24.027373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.400 [2024-12-16 02:57:24.027381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:53.400 [2024-12-16 02:57:24.032954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.400 [2024-12-16 02:57:24.032977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.400 [2024-12-16 02:57:24.032986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:53.400 [2024-12-16 02:57:24.038765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 
00:35:53.400 [2024-12-16 02:57:24.038791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.400 [2024-12-16 02:57:24.038799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:53.401 [2024-12-16 02:57:24.044493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.401 [2024-12-16 02:57:24.044516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.401 [2024-12-16 02:57:24.044524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:53.401 [2024-12-16 02:57:24.050052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.401 [2024-12-16 02:57:24.050075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.401 [2024-12-16 02:57:24.050082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:53.401 [2024-12-16 02:57:24.055371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.401 [2024-12-16 02:57:24.055394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.401 [2024-12-16 02:57:24.055403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:53.661 [2024-12-16 02:57:24.061195] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.661 [2024-12-16 02:57:24.061218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.661 [2024-12-16 02:57:24.061226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:53.661 [2024-12-16 02:57:24.066661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.661 [2024-12-16 02:57:24.066683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.661 [2024-12-16 02:57:24.066691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:53.661 [2024-12-16 02:57:24.072145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.661 [2024-12-16 02:57:24.072167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.661 [2024-12-16 02:57:24.072175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:53.661 [2024-12-16 02:57:24.077680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.661 [2024-12-16 02:57:24.077706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.661 [2024-12-16 02:57:24.077714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:35:53.661 [2024-12-16 02:57:24.084027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.661 [2024-12-16 02:57:24.084049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.661 [2024-12-16 02:57:24.084057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:53.661 [2024-12-16 02:57:24.092175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.661 [2024-12-16 02:57:24.092199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.661 [2024-12-16 02:57:24.092207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:53.661 [2024-12-16 02:57:24.099525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.661 [2024-12-16 02:57:24.099550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.661 [2024-12-16 02:57:24.099560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:53.661 [2024-12-16 02:57:24.106723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:53.661 [2024-12-16 02:57:24.106746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:53.661 [2024-12-16 02:57:24.106754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:53.661 [2024-12-16 02:57:24.113794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:53.661 [2024-12-16 02:57:24.113819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:53.661 [2024-12-16 02:57:24.113829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... repeated records elided: the same nvme_tcp.c:1365 *ERROR*: data digest error on tqpair=(0x1e91130) / nvme_qpair.c READ / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplets recur from 02:57:24.120967 through 02:57:24.697291, all on qid:1, differing only in timestamp, cid, lba, and sqhd ...]
00:35:54.184 [2024-12-16 02:57:24.704140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on
tqpair=(0x1e91130) 00:35:54.184 [2024-12-16 02:57:24.704164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.184 [2024-12-16 02:57:24.704173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:54.184 [2024-12-16 02:57:24.711102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.184 [2024-12-16 02:57:24.711126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.184 [2024-12-16 02:57:24.711134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:54.184 [2024-12-16 02:57:24.718546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.184 [2024-12-16 02:57:24.718573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.184 [2024-12-16 02:57:24.718582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:54.184 [2024-12-16 02:57:24.725150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.184 [2024-12-16 02:57:24.725174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.184 [2024-12-16 02:57:24.725183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:54.184 [2024-12-16 02:57:24.733144] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.184 [2024-12-16 02:57:24.733167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.184 [2024-12-16 02:57:24.733176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:54.184 [2024-12-16 02:57:24.741133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.184 [2024-12-16 02:57:24.741156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.184 [2024-12-16 02:57:24.741165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:54.184 [2024-12-16 02:57:24.747371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.184 [2024-12-16 02:57:24.747394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.184 [2024-12-16 02:57:24.747403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:54.184 [2024-12-16 02:57:24.753166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.184 [2024-12-16 02:57:24.753189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.184 [2024-12-16 02:57:24.753197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:35:54.184 [2024-12-16 02:57:24.758754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.185 [2024-12-16 02:57:24.758776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.185 [2024-12-16 02:57:24.758784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:54.185 [2024-12-16 02:57:24.763941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.185 [2024-12-16 02:57:24.763963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.185 [2024-12-16 02:57:24.763970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:54.185 [2024-12-16 02:57:24.768983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.185 [2024-12-16 02:57:24.769005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.185 [2024-12-16 02:57:24.769012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:54.185 [2024-12-16 02:57:24.773939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.185 [2024-12-16 02:57:24.773962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.185 [2024-12-16 02:57:24.773969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:54.185 [2024-12-16 02:57:24.778962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.185 [2024-12-16 02:57:24.778983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.185 [2024-12-16 02:57:24.778991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:54.185 [2024-12-16 02:57:24.783976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.185 [2024-12-16 02:57:24.783997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.185 [2024-12-16 02:57:24.784005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:54.185 [2024-12-16 02:57:24.788984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.185 [2024-12-16 02:57:24.789005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.185 [2024-12-16 02:57:24.789013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:54.185 [2024-12-16 02:57:24.793927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.185 [2024-12-16 02:57:24.793948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.185 [2024-12-16 02:57:24.793956] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:54.185 [2024-12-16 02:57:24.798903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.185 [2024-12-16 02:57:24.798925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.185 [2024-12-16 02:57:24.798933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:54.185 [2024-12-16 02:57:24.803816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.185 [2024-12-16 02:57:24.803838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.185 [2024-12-16 02:57:24.803852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:54.185 [2024-12-16 02:57:24.808785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.185 [2024-12-16 02:57:24.808806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.185 [2024-12-16 02:57:24.808815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:54.185 [2024-12-16 02:57:24.813841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.185 [2024-12-16 02:57:24.813872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13408 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:35:54.185 [2024-12-16 02:57:24.813880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:54.185 [2024-12-16 02:57:24.819073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.185 [2024-12-16 02:57:24.819095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.185 [2024-12-16 02:57:24.819103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:54.185 [2024-12-16 02:57:24.824280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.185 [2024-12-16 02:57:24.824308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.185 [2024-12-16 02:57:24.824315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:54.185 [2024-12-16 02:57:24.829422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.185 [2024-12-16 02:57:24.829444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.185 [2024-12-16 02:57:24.829452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:54.185 [2024-12-16 02:57:24.834555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.185 [2024-12-16 02:57:24.834577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.185 [2024-12-16 02:57:24.834585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:54.185 [2024-12-16 02:57:24.839680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.185 [2024-12-16 02:57:24.839703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.185 [2024-12-16 02:57:24.839711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:54.445 [2024-12-16 02:57:24.844874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.445 [2024-12-16 02:57:24.844896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.445 [2024-12-16 02:57:24.844904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:54.445 [2024-12-16 02:57:24.850155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.445 [2024-12-16 02:57:24.850176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.445 [2024-12-16 02:57:24.850183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:54.445 [2024-12-16 02:57:24.855345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.445 [2024-12-16 02:57:24.855367] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.445 [2024-12-16 02:57:24.855375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:54.445 [2024-12-16 02:57:24.860506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.445 [2024-12-16 02:57:24.860527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.445 [2024-12-16 02:57:24.860536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:54.445 [2024-12-16 02:57:24.865659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.445 [2024-12-16 02:57:24.865681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.445 [2024-12-16 02:57:24.865689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:54.445 [2024-12-16 02:57:24.870829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.446 [2024-12-16 02:57:24.870858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.446 [2024-12-16 02:57:24.870866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:54.446 [2024-12-16 02:57:24.876033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e91130) 00:35:54.446 [2024-12-16 02:57:24.876055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.446 [2024-12-16 02:57:24.876063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:54.446 [2024-12-16 02:57:24.881272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.446 [2024-12-16 02:57:24.881294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.446 [2024-12-16 02:57:24.881303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:54.446 [2024-12-16 02:57:24.886491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.446 [2024-12-16 02:57:24.886513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.446 [2024-12-16 02:57:24.886520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:54.446 [2024-12-16 02:57:24.891777] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.446 [2024-12-16 02:57:24.891799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.446 [2024-12-16 02:57:24.891808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:54.446 4742.00 IOPS, 592.75 MiB/s [2024-12-16T01:57:25.105Z] 
[2024-12-16 02:57:24.898237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.446 [2024-12-16 02:57:24.898256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.446 [2024-12-16 02:57:24.898265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:54.446 [2024-12-16 02:57:24.903604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.446 [2024-12-16 02:57:24.903626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.446 [2024-12-16 02:57:24.903639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:54.446 [2024-12-16 02:57:24.908845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.446 [2024-12-16 02:57:24.908875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.446 [2024-12-16 02:57:24.908883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:54.446 [2024-12-16 02:57:24.914399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.446 [2024-12-16 02:57:24.914422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.446 [2024-12-16 02:57:24.914431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:54.446 [2024-12-16 02:57:24.921607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.446 [2024-12-16 02:57:24.921630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.446 [2024-12-16 02:57:24.921638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:54.446 [2024-12-16 02:57:24.928778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.446 [2024-12-16 02:57:24.928801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.446 [2024-12-16 02:57:24.928809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:54.446 [2024-12-16 02:57:24.935196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.446 [2024-12-16 02:57:24.935218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.446 [2024-12-16 02:57:24.935227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:54.446 [2024-12-16 02:57:24.942729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.446 [2024-12-16 02:57:24.942752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.446 [2024-12-16 02:57:24.942761] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:54.446 [2024-12-16 02:57:24.949208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.446 [2024-12-16 02:57:24.949230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.446 [2024-12-16 02:57:24.949239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:54.446 [2024-12-16 02:57:24.957032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.446 [2024-12-16 02:57:24.957055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.446 [2024-12-16 02:57:24.957063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:54.446 [2024-12-16 02:57:24.964878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.446 [2024-12-16 02:57:24.964904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.446 [2024-12-16 02:57:24.964912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:54.446 [2024-12-16 02:57:24.972340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.446 [2024-12-16 02:57:24.972364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:54.446 [2024-12-16 02:57:24.972373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:54.446 [2024-12-16 02:57:24.979819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.446 [2024-12-16 02:57:24.979844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.446 [2024-12-16 02:57:24.979859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:54.446 [2024-12-16 02:57:24.987934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.446 [2024-12-16 02:57:24.987957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.446 [2024-12-16 02:57:24.987967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:54.446 [2024-12-16 02:57:24.995676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.446 [2024-12-16 02:57:24.995699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.446 [2024-12-16 02:57:24.995708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:54.446 [2024-12-16 02:57:25.002899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.446 [2024-12-16 02:57:25.002923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.446 [2024-12-16 02:57:25.002932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:54.446 [2024-12-16 02:57:25.009781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.446 [2024-12-16 02:57:25.009804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.446 [2024-12-16 02:57:25.009813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:54.446 [2024-12-16 02:57:25.017958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.446 [2024-12-16 02:57:25.017983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.446 [2024-12-16 02:57:25.017992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:54.446 [2024-12-16 02:57:25.025431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.446 [2024-12-16 02:57:25.025455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.446 [2024-12-16 02:57:25.025464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:54.446 [2024-12-16 02:57:25.033229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.446 [2024-12-16 02:57:25.033253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.446 [2024-12-16 02:57:25.033263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:54.446 [2024-12-16 02:57:25.040715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.446 [2024-12-16 02:57:25.040739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.446 [2024-12-16 02:57:25.040748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:54.446 [2024-12-16 02:57:25.048238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.446 [2024-12-16 02:57:25.048261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.446 [2024-12-16 02:57:25.048270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:54.446 [2024-12-16 02:57:25.056169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.446 [2024-12-16 02:57:25.056191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.447 [2024-12-16 02:57:25.056200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:54.447 [2024-12-16 02:57:25.064107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.447 [2024-12-16 02:57:25.064131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.447 [2024-12-16 02:57:25.064139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:54.447 [2024-12-16 02:57:25.071433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.447 [2024-12-16 02:57:25.071456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.447 [2024-12-16 02:57:25.071464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:54.447 [2024-12-16 02:57:25.078943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.447 [2024-12-16 02:57:25.078965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.447 [2024-12-16 02:57:25.078974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:54.447 [2024-12-16 02:57:25.085145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.447 [2024-12-16 02:57:25.085167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.447 [2024-12-16 02:57:25.085175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:54.447 [2024-12-16 02:57:25.091369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.447 [2024-12-16 02:57:25.091395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.447 [2024-12-16 02:57:25.091403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:54.447 [2024-12-16 02:57:25.098000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.447 [2024-12-16 02:57:25.098023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.447 [2024-12-16 02:57:25.098031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:54.705 [2024-12-16 02:57:25.104648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.705 [2024-12-16 02:57:25.104670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.705 [2024-12-16 02:57:25.104678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:54.705 [2024-12-16 02:57:25.110303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.705 [2024-12-16 02:57:25.110325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.705 [2024-12-16 02:57:25.110333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:54.705 [2024-12-16 02:57:25.113527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.705 [2024-12-16 02:57:25.113548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.705 [2024-12-16 02:57:25.113556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:54.705 [2024-12-16 02:57:25.119058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.705 [2024-12-16 02:57:25.119081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.705 [2024-12-16 02:57:25.119089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:54.705 [2024-12-16 02:57:25.124259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.705 [2024-12-16 02:57:25.124281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.705 [2024-12-16 02:57:25.124289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:54.705 [2024-12-16 02:57:25.129438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.129460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.129468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.135841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.135869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.135877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.143370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.143392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.143401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.150332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.150354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.150362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.156075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.156098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.156106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.161930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.161952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.161961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.167504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.167526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.167534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.173818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.173840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.173855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.181132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.181154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.181162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.187854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.187876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.187885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.194002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.194023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.194035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.200627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.200649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.200657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.207083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.207105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.207112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.213195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.213217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.213225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.219357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.219379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.219387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.225858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.225880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.225888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.231552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.231573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.231581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.237743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.237765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.237774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.245038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.245061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.245069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.252136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.252162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.252171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.258716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.258739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.258747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.265632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.265654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.265663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.273746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.273769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.273778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.280351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.280373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.280381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.286833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.286861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.286870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.293531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.293553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.293561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.300894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.300916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.300924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.308485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.308507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.308516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.316438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.316460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.316468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.324123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.324145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.324154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.332232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.332255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.332264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.340415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.340438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.340447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.348430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.348453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.348462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.355840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.355869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.355894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:54.706 [2024-12-16 02:57:25.362836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.706 [2024-12-16 02:57:25.362865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.706 [2024-12-16 02:57:25.362874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:54.965 [2024-12-16 02:57:25.368442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.965 [2024-12-16 02:57:25.368464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.965 [2024-12-16 02:57:25.368473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:54.965 [2024-12-16 02:57:25.373923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.965 [2024-12-16 02:57:25.373954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.965 [2024-12-16 02:57:25.373966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:54.965 [2024-12-16 02:57:25.379386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.965 [2024-12-16 02:57:25.379408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.965 [2024-12-16 02:57:25.379416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:54.965 [2024-12-16 02:57:25.384689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.965 [2024-12-16 02:57:25.384711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.965 [2024-12-16 02:57:25.384719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:54.965 [2024-12-16 02:57:25.390173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.965 [2024-12-16 02:57:25.390195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.965 [2024-12-16 02:57:25.390203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:54.966 [2024-12-16 02:57:25.395616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.966 [2024-12-16 02:57:25.395639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.966 [2024-12-16 02:57:25.395646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:54.966 [2024-12-16 02:57:25.401395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.966 [2024-12-16 02:57:25.401416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.966 [2024-12-16 02:57:25.401424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:54.966 [2024-12-16 02:57:25.406940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.966 [2024-12-16 02:57:25.406961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.966 [2024-12-16 02:57:25.406969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:54.966 [2024-12-16 02:57:25.412246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.966 [2024-12-16 02:57:25.412268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.966 [2024-12-16 02:57:25.412276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:54.966 [2024-12-16 02:57:25.417447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.966 [2024-12-16 02:57:25.417470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.966 [2024-12-16 02:57:25.417478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:54.966 [2024-12-16 02:57:25.423242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.966 [2024-12-16 02:57:25.423264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.966 [2024-12-16 02:57:25.423272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:54.966 [2024-12-16 02:57:25.428715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.966 [2024-12-16 02:57:25.428737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.966 [2024-12-16 02:57:25.428745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:54.966 [2024-12-16 02:57:25.434932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.966 [2024-12-16 02:57:25.434955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.966 [2024-12-16 02:57:25.434963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:54.966 [2024-12-16 02:57:25.443134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.966 [2024-12-16 02:57:25.443158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.966 [2024-12-16 02:57:25.443166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:54.966 [2024-12-16 02:57:25.450749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.966 [2024-12-16 02:57:25.450772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.966 [2024-12-16 02:57:25.450781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:54.966 [2024-12-16 02:57:25.456905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.966 [2024-12-16 02:57:25.456928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.966 [2024-12-16 02:57:25.456937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:54.966 [2024-12-16 02:57:25.462510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.966 [2024-12-16 02:57:25.462533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.966 [2024-12-16 02:57:25.462541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:54.966 [2024-12-16 02:57:25.467910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.966 [2024-12-16 02:57:25.467932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.966 [2024-12-16 02:57:25.467941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:54.966 [2024-12-16 02:57:25.473652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.966 [2024-12-16 02:57:25.473673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.966 [2024-12-16 02:57:25.473685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:54.966 [2024-12-16 02:57:25.477138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.966 [2024-12-16 02:57:25.477159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.966 [2024-12-16 02:57:25.477168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:54.966 [2024-12-16 02:57:25.484891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.966 [2024-12-16 02:57:25.484914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.966 [2024-12-16 02:57:25.484922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:54.966 [2024-12-16 02:57:25.492109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.966 [2024-12-16 02:57:25.492131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.966 [2024-12-16 02:57:25.492140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:54.966 [2024-12-16 02:57:25.498830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.966 [2024-12-16 02:57:25.498859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.966 [2024-12-16 02:57:25.498868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:54.966 [2024-12-16 02:57:25.506431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.966 [2024-12-16 02:57:25.506452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.966 [2024-12-16 02:57:25.506460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:54.966 [2024-12-16 02:57:25.514140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.966 [2024-12-16 02:57:25.514163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.966 [2024-12-16 02:57:25.514172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:54.966 [2024-12-16 02:57:25.521867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.966 [2024-12-16 02:57:25.521890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.966 [2024-12-16 02:57:25.521898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:54.966 [2024-12-16 02:57:25.529885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.966 [2024-12-16 02:57:25.529909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.966 [2024-12-16 02:57:25.529917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:54.966 [2024-12-16 02:57:25.538027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130)
00:35:54.966 [2024-12-16 02:57:25.538054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:54.966 [2024-12-16 02:57:25.538062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:54.966 [2024-12-16 02:57:25.545696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.966 [2024-12-16 02:57:25.545718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.966 [2024-12-16 02:57:25.545727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:54.966 [2024-12-16 02:57:25.552680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.966 [2024-12-16 02:57:25.552703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.966 [2024-12-16 02:57:25.552711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:54.966 [2024-12-16 02:57:25.557966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.966 [2024-12-16 02:57:25.557987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.966 [2024-12-16 02:57:25.557996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:54.966 [2024-12-16 02:57:25.563186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.966 [2024-12-16 02:57:25.563208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.966 [2024-12-16 02:57:25.563216] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:54.966 [2024-12-16 02:57:25.568449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.967 [2024-12-16 02:57:25.568471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.967 [2024-12-16 02:57:25.568479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:54.967 [2024-12-16 02:57:25.573703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.967 [2024-12-16 02:57:25.573725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.967 [2024-12-16 02:57:25.573733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:54.967 [2024-12-16 02:57:25.578946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.967 [2024-12-16 02:57:25.578968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.967 [2024-12-16 02:57:25.578976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:54.967 [2024-12-16 02:57:25.584102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.967 [2024-12-16 02:57:25.584123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:54.967 [2024-12-16 02:57:25.584131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:54.967 [2024-12-16 02:57:25.589253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.967 [2024-12-16 02:57:25.589274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.967 [2024-12-16 02:57:25.589282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:54.967 [2024-12-16 02:57:25.594374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.967 [2024-12-16 02:57:25.594395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.967 [2024-12-16 02:57:25.594403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:54.967 [2024-12-16 02:57:25.599429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.967 [2024-12-16 02:57:25.599450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.967 [2024-12-16 02:57:25.599458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:54.967 [2024-12-16 02:57:25.604568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.967 [2024-12-16 02:57:25.604590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:5 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.967 [2024-12-16 02:57:25.604599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:54.967 [2024-12-16 02:57:25.609755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.967 [2024-12-16 02:57:25.609775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.967 [2024-12-16 02:57:25.609783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:54.967 [2024-12-16 02:57:25.614868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.967 [2024-12-16 02:57:25.614889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.967 [2024-12-16 02:57:25.614897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:54.967 [2024-12-16 02:57:25.620028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:54.967 [2024-12-16 02:57:25.620050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:54.967 [2024-12-16 02:57:25.620058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:55.227 [2024-12-16 02:57:25.625295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.227 [2024-12-16 02:57:25.625317] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.227 [2024-12-16 02:57:25.625325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:55.227 [2024-12-16 02:57:25.630492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.227 [2024-12-16 02:57:25.630512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.227 [2024-12-16 02:57:25.630524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:55.227 [2024-12-16 02:57:25.635628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.227 [2024-12-16 02:57:25.635649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.227 [2024-12-16 02:57:25.635657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:55.227 [2024-12-16 02:57:25.640800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.227 [2024-12-16 02:57:25.640821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.227 [2024-12-16 02:57:25.640829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:55.227 [2024-12-16 02:57:25.645834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 
00:35:55.227 [2024-12-16 02:57:25.645862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.227 [2024-12-16 02:57:25.645870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:55.227 [2024-12-16 02:57:25.651037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.227 [2024-12-16 02:57:25.651058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.227 [2024-12-16 02:57:25.651067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:55.227 [2024-12-16 02:57:25.656216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.227 [2024-12-16 02:57:25.656237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.227 [2024-12-16 02:57:25.656245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:55.227 [2024-12-16 02:57:25.661406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.227 [2024-12-16 02:57:25.661427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.227 [2024-12-16 02:57:25.661434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:55.227 [2024-12-16 02:57:25.666575] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.227 [2024-12-16 02:57:25.666596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.227 [2024-12-16 02:57:25.666604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:55.227 [2024-12-16 02:57:25.671713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.227 [2024-12-16 02:57:25.671735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.227 [2024-12-16 02:57:25.671742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:55.227 [2024-12-16 02:57:25.676916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.227 [2024-12-16 02:57:25.676940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.227 [2024-12-16 02:57:25.676948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:55.227 [2024-12-16 02:57:25.682073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.227 [2024-12-16 02:57:25.682095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.227 [2024-12-16 02:57:25.682102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:35:55.227 [2024-12-16 02:57:25.687279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.227 [2024-12-16 02:57:25.687300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.227 [2024-12-16 02:57:25.687308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:55.227 [2024-12-16 02:57:25.692465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.228 [2024-12-16 02:57:25.692487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.228 [2024-12-16 02:57:25.692495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:55.228 [2024-12-16 02:57:25.697684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.228 [2024-12-16 02:57:25.697705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.228 [2024-12-16 02:57:25.697713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:55.228 [2024-12-16 02:57:25.702907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.228 [2024-12-16 02:57:25.702928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.228 [2024-12-16 02:57:25.702936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:55.228 [2024-12-16 02:57:25.708068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.228 [2024-12-16 02:57:25.708100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.228 [2024-12-16 02:57:25.708108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:55.228 [2024-12-16 02:57:25.713284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.228 [2024-12-16 02:57:25.713305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.228 [2024-12-16 02:57:25.713312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:55.228 [2024-12-16 02:57:25.718510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.228 [2024-12-16 02:57:25.718531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.228 [2024-12-16 02:57:25.718538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:55.228 [2024-12-16 02:57:25.723791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.228 [2024-12-16 02:57:25.723813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.228 [2024-12-16 02:57:25.723820] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:55.228 [2024-12-16 02:57:25.728941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.228 [2024-12-16 02:57:25.728962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.228 [2024-12-16 02:57:25.728969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:55.228 [2024-12-16 02:57:25.734040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.228 [2024-12-16 02:57:25.734061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.228 [2024-12-16 02:57:25.734069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:55.228 [2024-12-16 02:57:25.739198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.228 [2024-12-16 02:57:25.739219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.228 [2024-12-16 02:57:25.739227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:55.228 [2024-12-16 02:57:25.744385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.228 [2024-12-16 02:57:25.744406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:35:55.228 [2024-12-16 02:57:25.744414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:55.228 [2024-12-16 02:57:25.749575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.228 [2024-12-16 02:57:25.749595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.228 [2024-12-16 02:57:25.749604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:55.228 [2024-12-16 02:57:25.754640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.228 [2024-12-16 02:57:25.754661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.228 [2024-12-16 02:57:25.754669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:55.228 [2024-12-16 02:57:25.759786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.228 [2024-12-16 02:57:25.759807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.228 [2024-12-16 02:57:25.759815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:55.228 [2024-12-16 02:57:25.764977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.228 [2024-12-16 02:57:25.764997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.228 [2024-12-16 02:57:25.765009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:55.228 [2024-12-16 02:57:25.770195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.228 [2024-12-16 02:57:25.770216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.228 [2024-12-16 02:57:25.770224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:55.228 [2024-12-16 02:57:25.775197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.228 [2024-12-16 02:57:25.775218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.228 [2024-12-16 02:57:25.775226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:55.228 [2024-12-16 02:57:25.780347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.228 [2024-12-16 02:57:25.780369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.228 [2024-12-16 02:57:25.780376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:55.228 [2024-12-16 02:57:25.785548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.228 [2024-12-16 02:57:25.785568] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.228 [2024-12-16 02:57:25.785577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:55.228 [2024-12-16 02:57:25.790762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.228 [2024-12-16 02:57:25.790784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.228 [2024-12-16 02:57:25.790792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:55.228 [2024-12-16 02:57:25.795945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.228 [2024-12-16 02:57:25.795966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.228 [2024-12-16 02:57:25.795974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:55.228 [2024-12-16 02:57:25.801073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.228 [2024-12-16 02:57:25.801095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.228 [2024-12-16 02:57:25.801103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:55.228 [2024-12-16 02:57:25.806235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1e91130) 00:35:55.228 [2024-12-16 02:57:25.806255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.228 [2024-12-16 02:57:25.806263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:55.228 [2024-12-16 02:57:25.811457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.228 [2024-12-16 02:57:25.811479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.228 [2024-12-16 02:57:25.811486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:55.228 [2024-12-16 02:57:25.816620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.228 [2024-12-16 02:57:25.816640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.228 [2024-12-16 02:57:25.816648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:55.228 [2024-12-16 02:57:25.821675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.228 [2024-12-16 02:57:25.821696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.228 [2024-12-16 02:57:25.821704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:55.228 [2024-12-16 02:57:25.826902] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.228 [2024-12-16 02:57:25.826923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.228 [2024-12-16 02:57:25.826931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:55.228 [2024-12-16 02:57:25.832099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.228 [2024-12-16 02:57:25.832120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.228 [2024-12-16 02:57:25.832128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:55.228 [2024-12-16 02:57:25.837126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.229 [2024-12-16 02:57:25.837147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.229 [2024-12-16 02:57:25.837155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:55.229 [2024-12-16 02:57:25.842304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.229 [2024-12-16 02:57:25.842325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.229 [2024-12-16 02:57:25.842332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:35:55.229 [2024-12-16 02:57:25.847505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.229 [2024-12-16 02:57:25.847527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.229 [2024-12-16 02:57:25.847535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:55.229 [2024-12-16 02:57:25.852514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.229 [2024-12-16 02:57:25.852535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.229 [2024-12-16 02:57:25.852547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:55.229 [2024-12-16 02:57:25.857675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.229 [2024-12-16 02:57:25.857696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.229 [2024-12-16 02:57:25.857704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:55.229 [2024-12-16 02:57:25.862907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.229 [2024-12-16 02:57:25.862929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.229 [2024-12-16 02:57:25.862937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:55.229 [2024-12-16 02:57:25.868098] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.229 [2024-12-16 02:57:25.868120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.229 [2024-12-16 02:57:25.868128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:55.229 [2024-12-16 02:57:25.873162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.229 [2024-12-16 02:57:25.873183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.229 [2024-12-16 02:57:25.873191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:55.229 [2024-12-16 02:57:25.878303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.229 [2024-12-16 02:57:25.878323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.229 [2024-12-16 02:57:25.878331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:55.229 [2024-12-16 02:57:25.883456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.229 [2024-12-16 02:57:25.883478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.229 [2024-12-16 02:57:25.883486] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:55.488 [2024-12-16 02:57:25.888694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.488 [2024-12-16 02:57:25.888715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.488 [2024-12-16 02:57:25.888723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:55.488 [2024-12-16 02:57:25.893958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.488 [2024-12-16 02:57:25.893988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.488 [2024-12-16 02:57:25.893996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:55.488 [2024-12-16 02:57:25.899128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e91130) 00:35:55.488 [2024-12-16 02:57:25.899152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:55.488 [2024-12-16 02:57:25.899160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:55.488 4929.00 IOPS, 616.12 MiB/s 00:35:55.488 Latency(us) 00:35:55.488 [2024-12-16T01:57:26.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:55.488 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:55.488 nvme0n1 : 2.00 4930.60 616.33 
0.00 0.00 3242.39 678.77 14230.67 00:35:55.488 [2024-12-16T01:57:26.147Z] =================================================================================================================== 00:35:55.488 [2024-12-16T01:57:26.147Z] Total : 4930.60 616.33 0.00 0.00 3242.39 678.77 14230.67 00:35:55.488 { 00:35:55.488 "results": [ 00:35:55.488 { 00:35:55.488 "job": "nvme0n1", 00:35:55.488 "core_mask": "0x2", 00:35:55.488 "workload": "randread", 00:35:55.488 "status": "finished", 00:35:55.488 "queue_depth": 16, 00:35:55.488 "io_size": 131072, 00:35:55.488 "runtime": 2.003, 00:35:55.488 "iops": 4930.604093859211, 00:35:55.488 "mibps": 616.3255117324014, 00:35:55.488 "io_failed": 0, 00:35:55.488 "io_timeout": 0, 00:35:55.488 "avg_latency_us": 3242.392738914926, 00:35:55.488 "min_latency_us": 678.7657142857142, 00:35:55.488 "max_latency_us": 14230.674285714285 00:35:55.488 } 00:35:55.488 ], 00:35:55.488 "core_count": 1 00:35:55.488 } 00:35:55.488 02:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:55.488 02:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:55.488 02:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:55.488 | .driver_specific 00:35:55.488 | .nvme_error 00:35:55.488 | .status_code 00:35:55.488 | .command_transient_transport_error' 00:35:55.488 02:57:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:55.488 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 319 > 0 )) 00:35:55.488 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1198561 00:35:55.488 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- 
# '[' -z 1198561 ']' 00:35:55.488 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1198561 00:35:55.488 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:55.488 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:55.488 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1198561 00:35:55.748 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:55.748 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:55.748 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1198561' 00:35:55.748 killing process with pid 1198561 00:35:55.748 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1198561 00:35:55.748 Received shutdown signal, test time was about 2.000000 seconds 00:35:55.748 00:35:55.748 Latency(us) 00:35:55.748 [2024-12-16T01:57:26.407Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:55.748 [2024-12-16T01:57:26.407Z] =================================================================================================================== 00:35:55.748 [2024-12-16T01:57:26.407Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:55.748 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1198561 00:35:55.748 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:35:55.748 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:55.748 02:57:26 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:55.748 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:55.748 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:55.748 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1199176 00:35:55.748 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1199176 /var/tmp/bperf.sock 00:35:55.748 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:35:55.748 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1199176 ']' 00:35:55.748 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:55.748 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:55.748 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:55.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:55.748 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:55.748 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:55.748 [2024-12-16 02:57:26.397614] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:35:55.748 [2024-12-16 02:57:26.397661] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1199176 ] 00:35:56.007 [2024-12-16 02:57:26.471987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:56.007 [2024-12-16 02:57:26.494589] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:56.007 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:56.007 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:56.007 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:56.007 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:56.266 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:56.266 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.266 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:56.266 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.266 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:56.266 02:57:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:56.833 nvme0n1 00:35:56.833 02:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:56.833 02:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.833 02:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:56.833 02:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.833 02:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:56.833 02:57:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:56.833 Running I/O for 2 seconds... 
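For reference, the digest-error setup that the trace above performs can be condensed into the following standalone sketch. It is a reconstruction for readability, not part of the captured output: `SPDK_DIR` and `BPERF_RPC` are shorthand introduced here for the workspace path and bperf socket shown in the log, the commands require a running bdevperf instance plus an NVMe-oF target, and in the real `digest.sh` the `accel_error_inject_error` call goes to the target app over the default RPC socket while the other calls go to bdevperf.

```shell
# Assumed shorthand for paths visible in the log above (hypothetical variables).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF_RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/bperf.sock"

# Enable per-status-code NVMe error counters and retry failed I/O indefinitely,
# so transient transport errors are counted rather than failing the bdev.
$BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the target with data digest (--ddgst) enabled on the TCP transport.
$BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# On the target side: corrupt every 256th crc32c accel operation, so the data
# digest computed for received PDUs mismatches and the host sees
# "data digest error" / COMMAND TRANSIENT TRANSPORT ERROR completions.
"$SPDK_DIR/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256

# Drive the timed workload whose per-command failures are printed below.
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
```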
00:35:56.833 [2024-12-16 02:57:27.319949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef1868 00:35:56.833 [2024-12-16 02:57:27.320971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:11606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:56.833 [2024-12-16 02:57:27.320997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:56.833 [2024-12-16 02:57:27.329216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef8a50 00:35:56.833 [2024-12-16 02:57:27.330209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:56.833 [2024-12-16 02:57:27.330231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:56.833 [2024-12-16 02:57:27.338813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef5378 00:35:56.833 [2024-12-16 02:57:27.339733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:56.833 [2024-12-16 02:57:27.339754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:56.833 [2024-12-16 02:57:27.349221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee1f80 00:35:56.833 [2024-12-16 02:57:27.350743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:56.834 [2024-12-16 02:57:27.350763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:77 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:56.834 [2024-12-16 02:57:27.355646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016efbcf0 00:35:56.834 [2024-12-16 02:57:27.356220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:15136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:56.834 [2024-12-16 02:57:27.356240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:56.834 [2024-12-16 02:57:27.365188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee3498 00:35:56.834 [2024-12-16 02:57:27.366086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:10843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:56.834 [2024-12-16 02:57:27.366106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:56.834 [2024-12-16 02:57:27.374355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eff3c8 00:35:56.834 [2024-12-16 02:57:27.374815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:56.834 [2024-12-16 02:57:27.374835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:56.834 [2024-12-16 02:57:27.385752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016efb8b8 00:35:56.834 [2024-12-16 02:57:27.387294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:56.834 [2024-12-16 02:57:27.387313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:56.834 [2024-12-16 02:57:27.392294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee01f8 00:35:56.834 [2024-12-16 02:57:27.393109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:24432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:56.834 [2024-12-16 02:57:27.393128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:56.834 [2024-12-16 02:57:27.401675] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef6cc8 00:35:56.834 [2024-12-16 02:57:27.402597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:56.834 [2024-12-16 02:57:27.402616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:56.834 [2024-12-16 02:57:27.410861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef57b0 00:35:56.834 [2024-12-16 02:57:27.411802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:56.834 [2024-12-16 02:57:27.411823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:35:56.834 [2024-12-16 02:57:27.420519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eeaab8 00:35:56.834 [2024-12-16 02:57:27.421365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:24101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:56.834 [2024-12-16 02:57:27.421385] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:56.834 [2024-12-16 02:57:27.430194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016efef90 00:35:56.834 [2024-12-16 02:57:27.431413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:56.834 [2024-12-16 02:57:27.431432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:35:56.834 [2024-12-16 02:57:27.438783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee1f80 00:35:56.834 [2024-12-16 02:57:27.439704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:56.834 [2024-12-16 02:57:27.439723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:56.834 [2024-12-16 02:57:27.448121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef1ca0 00:35:56.834 [2024-12-16 02:57:27.449105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:56.834 [2024-12-16 02:57:27.449123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:56.834 [2024-12-16 02:57:27.459267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef3a28 00:35:56.834 [2024-12-16 02:57:27.460631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:56.834 
[2024-12-16 02:57:27.460651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:56.834 [2024-12-16 02:57:27.468463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee5a90 00:35:56.834 [2024-12-16 02:57:27.469823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:22697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:56.834 [2024-12-16 02:57:27.469843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:56.834 [2024-12-16 02:57:27.477192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eddc00 00:35:56.834 [2024-12-16 02:57:27.478540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:20191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:56.834 [2024-12-16 02:57:27.478559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:56.834 [2024-12-16 02:57:27.486391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eebfd0 00:35:56.834 [2024-12-16 02:57:27.487761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:56.834 [2024-12-16 02:57:27.487781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:57.093 [2024-12-16 02:57:27.492706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee01f8 00:35:57.093 [2024-12-16 02:57:27.493356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18759 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:35:57.093 [2024-12-16 02:57:27.493376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:57.093 [2024-12-16 02:57:27.503834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eeee38 00:35:57.093 [2024-12-16 02:57:27.505017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.093 [2024-12-16 02:57:27.505037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:57.093 [2024-12-16 02:57:27.514049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eee5c8 00:35:57.093 [2024-12-16 02:57:27.515716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.093 [2024-12-16 02:57:27.515736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:57.093 [2024-12-16 02:57:27.520948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eea248 00:35:57.093 [2024-12-16 02:57:27.521803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.093 [2024-12-16 02:57:27.521822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:57.093 [2024-12-16 02:57:27.532053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016efd208 00:35:57.093 [2024-12-16 02:57:27.533406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:88 nsid:1 lba:8568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.093 [2024-12-16 02:57:27.533425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:57.093 [2024-12-16 02:57:27.538554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee7c50 00:35:57.093 [2024-12-16 02:57:27.539164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.093 [2024-12-16 02:57:27.539186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:57.093 [2024-12-16 02:57:27.547981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eee5c8 00:35:57.093 [2024-12-16 02:57:27.548745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.093 [2024-12-16 02:57:27.548764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:57.093 [2024-12-16 02:57:27.558950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eec840 00:35:57.093 [2024-12-16 02:57:27.560206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.093 [2024-12-16 02:57:27.560224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:57.093 [2024-12-16 02:57:27.567191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef8a50 00:35:57.093 [2024-12-16 02:57:27.568419] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.093 [2024-12-16 02:57:27.568438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:57.093 [2024-12-16 02:57:27.576402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee2c28 00:35:57.093 [2024-12-16 02:57:27.577164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.093 [2024-12-16 02:57:27.577185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:57.093 [2024-12-16 02:57:27.587155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee49b0 00:35:57.093 [2024-12-16 02:57:27.588643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.093 [2024-12-16 02:57:27.588662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:57.093 [2024-12-16 02:57:27.593551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eee190 00:35:57.093 [2024-12-16 02:57:27.594237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.093 [2024-12-16 02:57:27.594256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:57.093 [2024-12-16 02:57:27.604937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee3060 00:35:57.093 
[2024-12-16 02:57:27.606332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.093 [2024-12-16 02:57:27.606350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:57.093 [2024-12-16 02:57:27.611475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016efd208 00:35:57.093 [2024-12-16 02:57:27.612142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:16684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.093 [2024-12-16 02:57:27.612161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:57.093 [2024-12-16 02:57:27.622079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef9b30 00:35:57.093 [2024-12-16 02:57:27.622901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.093 [2024-12-16 02:57:27.622925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:57.093 [2024-12-16 02:57:27.630363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee0ea0 00:35:57.093 [2024-12-16 02:57:27.631239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:15558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.093 [2024-12-16 02:57:27.631258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:57.093 [2024-12-16 02:57:27.641420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x7dddc0) with pdu=0x200016eeb328 00:35:57.093 [2024-12-16 02:57:27.642815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.093 [2024-12-16 02:57:27.642835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:57.093 [2024-12-16 02:57:27.647971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef2d80 00:35:57.093 [2024-12-16 02:57:27.648636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:14788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.093 [2024-12-16 02:57:27.648655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:35:57.093 [2024-12-16 02:57:27.656975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef6458 00:35:57.093 [2024-12-16 02:57:27.657647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.093 [2024-12-16 02:57:27.657666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:35:57.094 [2024-12-16 02:57:27.666492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef81e0 00:35:57.094 [2024-12-16 02:57:27.667068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.094 [2024-12-16 02:57:27.667088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:57.094 [2024-12-16 02:57:27.675683] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef6458 00:35:57.094 [2024-12-16 02:57:27.676386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.094 [2024-12-16 02:57:27.676405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:57.094 [2024-12-16 02:57:27.684683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee5ec8 00:35:57.094 [2024-12-16 02:57:27.685393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.094 [2024-12-16 02:57:27.685412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:57.094 [2024-12-16 02:57:27.693053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef5be8 00:35:57.094 [2024-12-16 02:57:27.693842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.094 [2024-12-16 02:57:27.693864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:57.094 [2024-12-16 02:57:27.704011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eeaab8 00:35:57.094 [2024-12-16 02:57:27.705165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.094 [2024-12-16 02:57:27.705186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 
00:35:57.094 [2024-12-16 02:57:27.712943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eea680 00:35:57.094 [2024-12-16 02:57:27.714119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:1482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.094 [2024-12-16 02:57:27.714138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:57.094 [2024-12-16 02:57:27.720951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef1868 00:35:57.094 [2024-12-16 02:57:27.721675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.094 [2024-12-16 02:57:27.721696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:57.094 [2024-12-16 02:57:27.730009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ede8a8 00:35:57.094 [2024-12-16 02:57:27.730573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.094 [2024-12-16 02:57:27.730592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:57.094 [2024-12-16 02:57:27.738762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef8618 00:35:57.094 [2024-12-16 02:57:27.739564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.094 [2024-12-16 02:57:27.739583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:98 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:57.094 [2024-12-16 02:57:27.747852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee0ea0 00:35:57.094 [2024-12-16 02:57:27.748690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.094 [2024-12-16 02:57:27.748709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:57.352 [2024-12-16 02:57:27.758743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee0ea0 00:35:57.352 [2024-12-16 02:57:27.760133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.352 [2024-12-16 02:57:27.760152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:57.352 [2024-12-16 02:57:27.767911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef8e88 00:35:57.352 [2024-12-16 02:57:27.769278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.352 [2024-12-16 02:57:27.769296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:57.353 [2024-12-16 02:57:27.775446] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef92c0 00:35:57.353 [2024-12-16 02:57:27.776436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.353 [2024-12-16 02:57:27.776456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:57.353 [2024-12-16 02:57:27.784284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef4298 00:35:57.353 [2024-12-16 02:57:27.785257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:24445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.353 [2024-12-16 02:57:27.785275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:57.353 [2024-12-16 02:57:27.794513] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016efe2e8 00:35:57.353 [2024-12-16 02:57:27.795794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.353 [2024-12-16 02:57:27.795813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:57.353 [2024-12-16 02:57:27.802213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef2510 00:35:57.353 [2024-12-16 02:57:27.803024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.353 [2024-12-16 02:57:27.803043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:57.353 [2024-12-16 02:57:27.811191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef7970 00:35:57.353 [2024-12-16 02:57:27.812083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.353 [2024-12-16 02:57:27.812103] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:35:57.353 [2024-12-16 02:57:27.819502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016efdeb0 00:35:57.353 [2024-12-16 02:57:27.820381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.353 [2024-12-16 02:57:27.820400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:57.353 [2024-12-16 02:57:27.830461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016edfdc0 00:35:57.353 [2024-12-16 02:57:27.831794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:9068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.353 [2024-12-16 02:57:27.831813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:57.353 [2024-12-16 02:57:27.837111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee3d08 00:35:57.353 [2024-12-16 02:57:27.837682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:17172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.353 [2024-12-16 02:57:27.837700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:57.353 [2024-12-16 02:57:27.848879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eeff18 00:35:57.353 [2024-12-16 02:57:27.850183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.353 
[2024-12-16 02:57:27.850203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:35:57.353 [2024-12-16 02:57:27.858101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee88f8 00:35:57.353 [2024-12-16 02:57:27.859468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:12724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.353 [2024-12-16 02:57:27.859490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:57.353 [2024-12-16 02:57:27.865627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016edf988 00:35:57.353 [2024-12-16 02:57:27.866234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.353 [2024-12-16 02:57:27.866253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:57.353 [2024-12-16 02:57:27.874864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016edf550 00:35:57.353 [2024-12-16 02:57:27.875549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.353 [2024-12-16 02:57:27.875569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:57.353 [2024-12-16 02:57:27.883292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef8e88 00:35:57.353 [2024-12-16 02:57:27.884547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20712 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:35:57.353 [2024-12-16 02:57:27.884565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:57.353 [2024-12-16 02:57:27.892744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eddc00 00:35:57.353 [2024-12-16 02:57:27.893885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.353 [2024-12-16 02:57:27.893904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:57.353 [2024-12-16 02:57:27.901740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee8088 00:35:57.353 [2024-12-16 02:57:27.902433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.353 [2024-12-16 02:57:27.902452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:57.353 [2024-12-16 02:57:27.910208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016edf118 00:35:57.353 [2024-12-16 02:57:27.910810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.353 [2024-12-16 02:57:27.910829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:57.353 [2024-12-16 02:57:27.918348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eeb328 00:35:57.353 [2024-12-16 02:57:27.919126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:119 nsid:1 lba:17181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.353 [2024-12-16 02:57:27.919145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:57.353 [2024-12-16 02:57:27.927370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eec408 00:35:57.353 [2024-12-16 02:57:27.928166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.353 [2024-12-16 02:57:27.928185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:57.353 [2024-12-16 02:57:27.936150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eeea00 00:35:57.353 [2024-12-16 02:57:27.936936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:2055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.353 [2024-12-16 02:57:27.936955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:35:57.353 [2024-12-16 02:57:27.947094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef8a50 00:35:57.353 [2024-12-16 02:57:27.948292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.353 [2024-12-16 02:57:27.948312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:57.353 [2024-12-16 02:57:27.954455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eeee38 00:35:57.353 [2024-12-16 02:57:27.955036] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.353 [2024-12-16 02:57:27.955055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:57.353 [2024-12-16 02:57:27.963656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef7100 00:35:57.353 [2024-12-16 02:57:27.964121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.353 [2024-12-16 02:57:27.964140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:57.353 [2024-12-16 02:57:27.974950] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee8d30 00:35:57.353 [2024-12-16 02:57:27.976433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.353 [2024-12-16 02:57:27.976453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:57.353 [2024-12-16 02:57:27.981268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef6890 00:35:57.353 [2024-12-16 02:57:27.981998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:18566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.353 [2024-12-16 02:57:27.982016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:57.353 [2024-12-16 02:57:27.991503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef46d0 00:35:57.353 
[2024-12-16 02:57:27.992639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.353 [2024-12-16 02:57:27.992658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:57.353 [2024-12-16 02:57:28.000539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee95a0 00:35:57.353 [2024-12-16 02:57:28.001668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.353 [2024-12-16 02:57:28.001687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:57.353 [2024-12-16 02:57:28.009620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee9e10 00:35:57.353 [2024-12-16 02:57:28.010340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.353 [2024-12-16 02:57:28.010360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:57.613 [2024-12-16 02:57:28.018240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eec408 00:35:57.613 [2024-12-16 02:57:28.018840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.613 [2024-12-16 02:57:28.018865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:57.613 [2024-12-16 02:57:28.028549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x7dddc0) with pdu=0x200016eebb98 00:35:57.613 [2024-12-16 02:57:28.029816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.613 [2024-12-16 02:57:28.029835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:57.613 [2024-12-16 02:57:28.036132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef0788 00:35:57.613 [2024-12-16 02:57:28.036814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.613 [2024-12-16 02:57:28.036833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:57.613 [2024-12-16 02:57:28.047207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef81e0 00:35:57.613 [2024-12-16 02:57:28.048731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.613 [2024-12-16 02:57:28.048750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:57.613 [2024-12-16 02:57:28.053695] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eed0b0 00:35:57.613 [2024-12-16 02:57:28.054542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:5731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.613 [2024-12-16 02:57:28.054561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:57.613 [2024-12-16 02:57:28.065075] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016efdeb0 00:35:57.613 [2024-12-16 02:57:28.066402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.613 [2024-12-16 02:57:28.066421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:57.613 [2024-12-16 02:57:28.074117] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee84c0 00:35:57.613 [2024-12-16 02:57:28.075411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:16860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.613 [2024-12-16 02:57:28.075430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:57.613 [2024-12-16 02:57:28.082189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eeaab8 00:35:57.613 [2024-12-16 02:57:28.083540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.613 [2024-12-16 02:57:28.083559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:57.613 [2024-12-16 02:57:28.090058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016efc560 00:35:57.613 [2024-12-16 02:57:28.090769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.613 [2024-12-16 02:57:28.090790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007b p:0 m:0 dnr:0 
00:35:57.613 [2024-12-16 02:57:28.101080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eef6a8 00:35:57.613 [2024-12-16 02:57:28.102174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.613 [2024-12-16 02:57:28.102194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:57.613 [2024-12-16 02:57:28.111435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eed0b0 00:35:57.613 [2024-12-16 02:57:28.112976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.613 [2024-12-16 02:57:28.112995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:57.613 [2024-12-16 02:57:28.117733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eec408 00:35:57.613 [2024-12-16 02:57:28.118431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:24041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.613 [2024-12-16 02:57:28.118450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:57.613 [2024-12-16 02:57:28.126228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef2d80 00:35:57.613 [2024-12-16 02:57:28.126937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.613 [2024-12-16 02:57:28.126957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:57.613 [2024-12-16 02:57:28.137247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016efa3a0 00:35:57.613 [2024-12-16 02:57:28.138444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.613 [2024-12-16 02:57:28.138463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:57.613 [2024-12-16 02:57:28.145622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef6458 00:35:57.613 [2024-12-16 02:57:28.146634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.613 [2024-12-16 02:57:28.146652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:57.613 [2024-12-16 02:57:28.154610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef31b8 00:35:57.613 [2024-12-16 02:57:28.155561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:24920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.613 [2024-12-16 02:57:28.155580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:35:57.613 [2024-12-16 02:57:28.163250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016efb8b8 00:35:57.613 [2024-12-16 02:57:28.164143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:57.613 [2024-12-16 02:57:28.164162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:35:57.613 [2024-12-16 02:57:28.172154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eddc00
00:35:57.613 [2024-12-16 02:57:28.172972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:18309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.613 [2024-12-16 02:57:28.172992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:35:57.613 [2024-12-16 02:57:28.181447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee4578
00:35:57.613 [2024-12-16 02:57:28.182036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.613 [2024-12-16 02:57:28.182055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:35:57.613 [2024-12-16 02:57:28.190164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eecc78
00:35:57.613 [2024-12-16 02:57:28.191049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.613 [2024-12-16 02:57:28.191069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:35:57.613 [2024-12-16 02:57:28.199204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016efa3a0
00:35:57.613 [2024-12-16 02:57:28.200138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.613 [2024-12-16 02:57:28.200156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:35:57.613 [2024-12-16 02:57:28.208279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016edf118
00:35:57.613 [2024-12-16 02:57:28.208750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.613 [2024-12-16 02:57:28.208770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:35:57.613 [2024-12-16 02:57:28.217591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee7818
00:35:57.613 [2024-12-16 02:57:28.218199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.613 [2024-12-16 02:57:28.218218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:35:57.614 [2024-12-16 02:57:28.226370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee49b0
00:35:57.614 [2024-12-16 02:57:28.227276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.614 [2024-12-16 02:57:28.227296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:35:57.614 [2024-12-16 02:57:28.235346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef35f0
00:35:57.614 [2024-12-16 02:57:28.236185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.614 [2024-12-16 02:57:28.236204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:35:57.614 [2024-12-16 02:57:28.244794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016efb480
00:35:57.614 [2024-12-16 02:57:28.245893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.614 [2024-12-16 02:57:28.245912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:35:57.614 [2024-12-16 02:57:28.255873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016efc998
00:35:57.614 [2024-12-16 02:57:28.257435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:23729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.614 [2024-12-16 02:57:28.257454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:35:57.614 [2024-12-16 02:57:28.262337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef2510
00:35:57.614 [2024-12-16 02:57:28.263193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.614 [2024-12-16 02:57:28.263211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:35:57.873 [2024-12-16 02:57:28.273582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef7538
00:35:57.873 [2024-12-16 02:57:28.274965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:25364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.873 [2024-12-16 02:57:28.274985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:35:57.873 [2024-12-16 02:57:28.280256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef1868
00:35:57.873 [2024-12-16 02:57:28.280874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.873 [2024-12-16 02:57:28.280893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:35:57.873 [2024-12-16 02:57:28.291104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eedd58
00:35:57.873 [2024-12-16 02:57:28.292157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.873 [2024-12-16 02:57:28.292176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:35:57.873 [2024-12-16 02:57:28.299592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ede8a8
00:35:57.873 [2024-12-16 02:57:28.300581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.873 [2024-12-16 02:57:28.300599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:57.873 [2024-12-16 02:57:28.308947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef35f0
00:35:57.873 [2024-12-16 02:57:28.310053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.873 [2024-12-16 02:57:28.310072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:35:57.873 27987.00 IOPS, 109.32 MiB/s [2024-12-16T01:57:28.532Z] [2024-12-16 02:57:28.319415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef9b30
00:35:57.873 [2024-12-16 02:57:28.319962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.873 [2024-12-16 02:57:28.319981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:35:57.873 [2024-12-16 02:57:28.329536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016efd208
00:35:57.873 [2024-12-16 02:57:28.330781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.873 [2024-12-16 02:57:28.330803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:57.873 [2024-12-16 02:57:28.338109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef0788
00:35:57.873 [2024-12-16 02:57:28.339365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.873 [2024-12-16 02:57:28.339384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:57.873 [2024-12-16 02:57:28.347716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef0bc0
00:35:57.873 [2024-12-16 02:57:28.349120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.873 [2024-12-16 02:57:28.349139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:35:57.873 [2024-12-16 02:57:28.354518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee5a90
00:35:57.873 [2024-12-16 02:57:28.355182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.873 [2024-12-16 02:57:28.355201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:35:57.873 [2024-12-16 02:57:28.365660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef46d0
00:35:57.873 [2024-12-16 02:57:28.366804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:22369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.873 [2024-12-16 02:57:28.366824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:35:57.873 [2024-12-16 02:57:28.374759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eed0b0
00:35:57.873 [2024-12-16 02:57:28.375449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:3526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.873 [2024-12-16 02:57:28.375470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:35:57.873 [2024-12-16 02:57:28.382876] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee1f80
00:35:57.873 [2024-12-16 02:57:28.383754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:16821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.873 [2024-12-16 02:57:28.383773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:35:57.873 [2024-12-16 02:57:28.393928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef0788
00:35:57.873 [2024-12-16 02:57:28.395399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.874 [2024-12-16 02:57:28.395419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:35:57.874 [2024-12-16 02:57:28.400260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016efcdd0
00:35:57.874 [2024-12-16 02:57:28.400814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:6018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.874 [2024-12-16 02:57:28.400834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:57.874 [2024-12-16 02:57:28.409606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ede038
00:35:57.874 [2024-12-16 02:57:28.410400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.874 [2024-12-16 02:57:28.410419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:35:57.874 [2024-12-16 02:57:28.419072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016edf988
00:35:57.874 [2024-12-16 02:57:28.420075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.874 [2024-12-16 02:57:28.420095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:35:57.874 [2024-12-16 02:57:28.427382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eef270
00:35:57.874 [2024-12-16 02:57:28.427939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.874 [2024-12-16 02:57:28.427958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:35:57.874 [2024-12-16 02:57:28.436703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef5378
00:35:57.874 [2024-12-16 02:57:28.437585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.874 [2024-12-16 02:57:28.437604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:35:57.874 [2024-12-16 02:57:28.445791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eefae0
00:35:57.874 [2024-12-16 02:57:28.446675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.874 [2024-12-16 02:57:28.446694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:35:57.874 [2024-12-16 02:57:28.455340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016efef90
00:35:57.874 [2024-12-16 02:57:28.456128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:14712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.874 [2024-12-16 02:57:28.456147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:35:57.874 [2024-12-16 02:57:28.464493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee6fa8
00:35:57.874 [2024-12-16 02:57:28.465504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.874 [2024-12-16 02:57:28.465524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:35:57.874 [2024-12-16 02:57:28.473122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef57b0
00:35:57.874 [2024-12-16 02:57:28.474087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.874 [2024-12-16 02:57:28.474106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:35:57.874 [2024-12-16 02:57:28.482283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016edece0
00:35:57.874 [2024-12-16 02:57:28.483285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.874 [2024-12-16 02:57:28.483304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:35:57.874 [2024-12-16 02:57:28.491670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eebb98
00:35:57.874 [2024-12-16 02:57:28.492791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.874 [2024-12-16 02:57:28.492819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:35:57.874 [2024-12-16 02:57:28.500052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee7818
00:35:57.874 [2024-12-16 02:57:28.500732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.874 [2024-12-16 02:57:28.500750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:35:57.874 [2024-12-16 02:57:28.508249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016efb480
00:35:57.874 [2024-12-16 02:57:28.509010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.874 [2024-12-16 02:57:28.509029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:35:57.874 [2024-12-16 02:57:28.517441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef4b08
00:35:57.874 [2024-12-16 02:57:28.518200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:24426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.874 [2024-12-16 02:57:28.518220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:35:57.874 [2024-12-16 02:57:28.528200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee1f80
00:35:57.874 [2024-12-16 02:57:28.529394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:57.874 [2024-12-16 02:57:28.529414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:35:58.133 [2024-12-16 02:57:28.537762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eee5c8
00:35:58.133 [2024-12-16 02:57:28.539125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:3689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.133 [2024-12-16 02:57:28.539144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:35:58.133 [2024-12-16 02:57:28.547196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee3498
00:35:58.133 [2024-12-16 02:57:28.548704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:8309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.133 [2024-12-16 02:57:28.548722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:35:58.133 [2024-12-16 02:57:28.553706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016efe720
00:35:58.133 [2024-12-16 02:57:28.554478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.133 [2024-12-16 02:57:28.554498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:35:58.133 [2024-12-16 02:57:28.564509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef4b08
00:35:58.133 [2024-12-16 02:57:28.565720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.133 [2024-12-16 02:57:28.565742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:35:58.133 [2024-12-16 02:57:28.571792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee38d0
00:35:58.133 [2024-12-16 02:57:28.572368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:15831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.133 [2024-12-16 02:57:28.572387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:35:58.133 [2024-12-16 02:57:28.580886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee38d0
00:35:58.133 [2024-12-16 02:57:28.581459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.133 [2024-12-16 02:57:28.581479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:35:58.133 [2024-12-16 02:57:28.590136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef9f68
00:35:58.133 [2024-12-16 02:57:28.590616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.133 [2024-12-16 02:57:28.590636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:35:58.133 [2024-12-16 02:57:28.600700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee0a68
00:35:58.133 [2024-12-16 02:57:28.601970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.133 [2024-12-16 02:57:28.601990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:35:58.133 [2024-12-16 02:57:28.609005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef20d8
00:35:58.133 [2024-12-16 02:57:28.610341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:2686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.133 [2024-12-16 02:57:28.610360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:35:58.133 [2024-12-16 02:57:28.617377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016efa7d8
00:35:58.133 [2024-12-16 02:57:28.618093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.133 [2024-12-16 02:57:28.618126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:35:58.133 [2024-12-16 02:57:28.626696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef7538
00:35:58.133 [2024-12-16 02:57:28.627493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.133 [2024-12-16 02:57:28.627511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:35:58.133 [2024-12-16 02:57:28.635115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee3498
00:35:58.133 [2024-12-16 02:57:28.635868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.133 [2024-12-16 02:57:28.635887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:35:58.133 [2024-12-16 02:57:28.645066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee88f8
00:35:58.133 [2024-12-16 02:57:28.645966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.134 [2024-12-16 02:57:28.645987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:35:58.134 [2024-12-16 02:57:28.653999] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016edece0
00:35:58.134 [2024-12-16 02:57:28.654906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:25357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.134 [2024-12-16 02:57:28.654925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:35:58.134 [2024-12-16 02:57:28.662886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef6458
00:35:58.134 [2024-12-16 02:57:28.663795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.134 [2024-12-16 02:57:28.663814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:35:58.134 [2024-12-16 02:57:28.672052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eeea00
00:35:58.134 [2024-12-16 02:57:28.672743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.134 [2024-12-16 02:57:28.672762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:35:58.134 [2024-12-16 02:57:28.681138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee3d08
00:35:58.134 [2024-12-16 02:57:28.682148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:12572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.134 [2024-12-16 02:57:28.682167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:35:58.134 [2024-12-16 02:57:28.690337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee8088
00:35:58.134 [2024-12-16 02:57:28.691473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:23529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.134 [2024-12-16 02:57:28.691491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:35:58.134 [2024-12-16 02:57:28.698839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef6020
00:35:58.134 [2024-12-16 02:57:28.699942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.134 [2024-12-16 02:57:28.699961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:35:58.134 [2024-12-16 02:57:28.707142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee5a90
00:35:58.134 [2024-12-16 02:57:28.707914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.134 [2024-12-16 02:57:28.707934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:35:58.134 [2024-12-16 02:57:28.715907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee5220
00:35:58.134 [2024-12-16 02:57:28.716690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.134 [2024-12-16 02:57:28.716709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:35:58.134 [2024-12-16 02:57:28.724866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef8a50
00:35:58.134 [2024-12-16 02:57:28.725656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.134 [2024-12-16 02:57:28.725675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:35:58.134 [2024-12-16 02:57:28.734043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eeff18
00:35:58.134 [2024-12-16 02:57:28.734605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.134 [2024-12-16 02:57:28.734625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:35:58.134 [2024-12-16 02:57:28.744296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016efc998
00:35:58.134 [2024-12-16 02:57:28.745641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.134 [2024-12-16 02:57:28.745660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:35:58.134 [2024-12-16 02:57:28.752624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eee5c8
00:35:58.134 [2024-12-16 02:57:28.753654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:20461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.134 [2024-12-16 02:57:28.753673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:35:58.134 [2024-12-16 02:57:28.761344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ede470
00:35:58.134 [2024-12-16 02:57:28.762382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.134 [2024-12-16 02:57:28.762400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:35:58.134 [2024-12-16 02:57:28.770299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef1ca0
00:35:58.134 [2024-12-16 02:57:28.771309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18683 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.134 [2024-12-16 02:57:28.771328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:35:58.134 [2024-12-16 02:57:28.779506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eedd58
00:35:58.134 [2024-12-16 02:57:28.780605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.134 [2024-12-16 02:57:28.780624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:58.134 [2024-12-16 02:57:28.788032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee0ea0
00:35:58.134 [2024-12-16 02:57:28.789151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:3258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.134 [2024-12-16 02:57:28.789170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:35:58.394 [2024-12-16 02:57:28.796649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee12d8
00:35:58.394 [2024-12-16 02:57:28.797437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.394 [2024-12-16 02:57:28.797456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:35:58.394 [2024-12-16 02:57:28.805509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef0788
00:35:58.394 [2024-12-16 02:57:28.806302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:1674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.394 [2024-12-16 02:57:28.806321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:35:58.394 [2024-12-16 02:57:28.814428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eebfd0
00:35:58.394 [2024-12-16 02:57:28.815203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.394 [2024-12-16 02:57:28.815221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:35:58.394 [2024-12-16 02:57:28.824543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee3d08
00:35:58.394 [2024-12-16 02:57:28.825771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.394 [2024-12-16 02:57:28.825789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:35:58.394 [2024-12-16 02:57:28.832811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef31b8
00:35:58.394 [2024-12-16 02:57:28.833692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.394 [2024-12-16 02:57:28.833711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:35:58.394 [2024-12-16 02:57:28.841697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee01f8
00:35:58.394 [2024-12-16 02:57:28.842656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:23995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.394 [2024-12-16 02:57:28.842674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:35:58.394 [2024-12-16 02:57:28.852089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef0ff8
00:35:58.394 [2024-12-16 02:57:28.853456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.394 [2024-12-16 02:57:28.853475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:35:58.394 [2024-12-16 02:57:28.861548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef8618
00:35:58.394 [2024-12-16 02:57:28.863007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.394 [2024-12-16 02:57:28.863026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:35:58.394 [2024-12-16 02:57:28.867967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef6cc8
00:35:58.394 [2024-12-16 02:57:28.868646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.394 [2024-12-16 02:57:28.868665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:58.394 [2024-12-16 02:57:28.876872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016efa7d8
00:35:58.394 [2024-12-16 02:57:28.877538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.394 [2024-12-16 02:57:28.877560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:35:58.394 [2024-12-16 02:57:28.886184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eddc00
00:35:58.394 [2024-12-16 02:57:28.886978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.394 [2024-12-16 02:57:28.886996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:35:58.394 [2024-12-16 02:57:28.895304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee27f0
00:35:58.394 [2024-12-16 02:57:28.896171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.394 [2024-12-16 02:57:28.896191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:35:58.394 [2024-12-16 02:57:28.904399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eefae0
00:35:58.394 [2024-12-16 02:57:28.905200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:58.394 [2024-12-16 02:57:28.905219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:35:58.394 [2024-12-16 02:57:28.913939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ede8a8 00:35:58.394 [2024-12-16 02:57:28.914884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.394 [2024-12-16 02:57:28.914904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:58.394 [2024-12-16 02:57:28.923525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee0a68 00:35:58.394 [2024-12-16 02:57:28.924577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16292 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.394 [2024-12-16 02:57:28.924596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:58.394 [2024-12-16 02:57:28.931896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef8618 00:35:58.394 [2024-12-16 02:57:28.932687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.394 [2024-12-16 02:57:28.932705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:35:58.394 [2024-12-16 02:57:28.940482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016efa7d8 00:35:58.394 [2024-12-16 02:57:28.941070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.395 [2024-12-16 02:57:28.941090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:35:58.395 [2024-12-16 02:57:28.949844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee4140 00:35:58.395 [2024-12-16 02:57:28.950661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.395 [2024-12-16 02:57:28.950680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:35:58.395 [2024-12-16 02:57:28.959166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef4f40 00:35:58.395 [2024-12-16 02:57:28.960121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.395 [2024-12-16 02:57:28.960140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:58.395 [2024-12-16 02:57:28.970178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee88f8 00:35:58.395 [2024-12-16 02:57:28.971626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:14170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.395 [2024-12-16 02:57:28.971645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:58.395 [2024-12-16 02:57:28.976620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef2510 00:35:58.395 [2024-12-16 02:57:28.977351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.395 
[2024-12-16 02:57:28.977370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:58.395 [2024-12-16 02:57:28.987153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee4578 00:35:58.395 [2024-12-16 02:57:28.988128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.395 [2024-12-16 02:57:28.988147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:58.395 [2024-12-16 02:57:28.996216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef6020 00:35:58.395 [2024-12-16 02:57:28.997195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.395 [2024-12-16 02:57:28.997214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:58.395 [2024-12-16 02:57:29.004571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef5be8 00:35:58.395 [2024-12-16 02:57:29.005505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.395 [2024-12-16 02:57:29.005526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:58.395 [2024-12-16 02:57:29.013571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eecc78 00:35:58.395 [2024-12-16 02:57:29.014558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:16541 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:35:58.395 [2024-12-16 02:57:29.014578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:58.395 [2024-12-16 02:57:29.022603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016efb480 00:35:58.395 [2024-12-16 02:57:29.023227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.395 [2024-12-16 02:57:29.023247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:58.395 [2024-12-16 02:57:29.031655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eedd58 00:35:58.395 [2024-12-16 02:57:29.032647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.395 [2024-12-16 02:57:29.032665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:58.395 [2024-12-16 02:57:29.042679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef2d80 00:35:58.395 [2024-12-16 02:57:29.044166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:13240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.395 [2024-12-16 02:57:29.044185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:58.395 [2024-12-16 02:57:29.049195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee8088 00:35:58.395 [2024-12-16 02:57:29.049990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:34 nsid:1 lba:5338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.395 [2024-12-16 02:57:29.050009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:58.655 [2024-12-16 02:57:29.060408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef6cc8 00:35:58.655 [2024-12-16 02:57:29.061703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.655 [2024-12-16 02:57:29.061722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:58.655 [2024-12-16 02:57:29.066915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee4de8 00:35:58.655 [2024-12-16 02:57:29.067451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.655 [2024-12-16 02:57:29.067469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:58.655 [2024-12-16 02:57:29.078472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee5a90 00:35:58.655 [2024-12-16 02:57:29.079749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.655 [2024-12-16 02:57:29.079769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:58.655 [2024-12-16 02:57:29.084983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef3a28 00:35:58.655 [2024-12-16 02:57:29.085542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:18472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.655 [2024-12-16 02:57:29.085560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:58.655 [2024-12-16 02:57:29.096793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee38d0 00:35:58.655 [2024-12-16 02:57:29.098104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.655 [2024-12-16 02:57:29.098125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:58.655 [2024-12-16 02:57:29.106376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eedd58 00:35:58.655 [2024-12-16 02:57:29.107806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.655 [2024-12-16 02:57:29.107826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:58.655 [2024-12-16 02:57:29.112971] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef3a28 00:35:58.655 [2024-12-16 02:57:29.113678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:20904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.655 [2024-12-16 02:57:29.113703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:35:58.655 [2024-12-16 02:57:29.123663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee1b48 00:35:58.655 
[2024-12-16 02:57:29.124639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.655 [2024-12-16 02:57:29.124659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:58.655 [2024-12-16 02:57:29.133077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef57b0 00:35:58.655 [2024-12-16 02:57:29.134255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.655 [2024-12-16 02:57:29.134274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:58.655 [2024-12-16 02:57:29.141494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee84c0 00:35:58.655 [2024-12-16 02:57:29.142393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.655 [2024-12-16 02:57:29.142413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:58.655 [2024-12-16 02:57:29.150643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef4b08 00:35:58.655 [2024-12-16 02:57:29.151603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.655 [2024-12-16 02:57:29.151623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:58.655 [2024-12-16 02:57:29.159489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x7dddc0) with pdu=0x200016ee6b70 00:35:58.655 [2024-12-16 02:57:29.160219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.655 [2024-12-16 02:57:29.160238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:58.655 [2024-12-16 02:57:29.170415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016efeb58 00:35:58.655 [2024-12-16 02:57:29.171868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:3476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.655 [2024-12-16 02:57:29.171887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:58.655 [2024-12-16 02:57:29.176806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef0bc0 00:35:58.655 [2024-12-16 02:57:29.177528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.655 [2024-12-16 02:57:29.177548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:58.655 [2024-12-16 02:57:29.186400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee6b70 00:35:58.655 [2024-12-16 02:57:29.187117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.655 [2024-12-16 02:57:29.187137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:58.655 [2024-12-16 02:57:29.197336] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee3498 00:35:58.655 [2024-12-16 02:57:29.198810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.655 [2024-12-16 02:57:29.198830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:58.655 [2024-12-16 02:57:29.203676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef4b08 00:35:58.655 [2024-12-16 02:57:29.204280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.655 [2024-12-16 02:57:29.204299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:58.655 [2024-12-16 02:57:29.213123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee88f8 00:35:58.655 [2024-12-16 02:57:29.213978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.655 [2024-12-16 02:57:29.213998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:58.655 [2024-12-16 02:57:29.221942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eee5c8 00:35:58.655 [2024-12-16 02:57:29.222548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.655 [2024-12-16 02:57:29.222568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 
00:35:58.655 [2024-12-16 02:57:29.231142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef0bc0 00:35:58.655 [2024-12-16 02:57:29.232003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.655 [2024-12-16 02:57:29.232022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:58.655 [2024-12-16 02:57:29.242006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef8618 00:35:58.655 [2024-12-16 02:57:29.243358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.655 [2024-12-16 02:57:29.243379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:58.655 [2024-12-16 02:57:29.248401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016edf550 00:35:58.655 [2024-12-16 02:57:29.249036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.655 [2024-12-16 02:57:29.249057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:35:58.655 [2024-12-16 02:57:29.259972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee7c50 00:35:58.655 [2024-12-16 02:57:29.261330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:24552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.655 [2024-12-16 02:57:29.261349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:30 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:58.655 [2024-12-16 02:57:29.266357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee6738 00:35:58.655 [2024-12-16 02:57:29.267008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.656 [2024-12-16 02:57:29.267028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:58.656 [2024-12-16 02:57:29.277927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ede8a8 00:35:58.656 [2024-12-16 02:57:29.279323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:9425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.656 [2024-12-16 02:57:29.279342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:58.656 [2024-12-16 02:57:29.285131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee0630 00:35:58.656 [2024-12-16 02:57:29.286022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:20951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.656 [2024-12-16 02:57:29.286041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:58.656 [2024-12-16 02:57:29.296093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016eeaef0 00:35:58.656 [2024-12-16 02:57:29.297454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.656 [2024-12-16 02:57:29.297473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:58.656 [2024-12-16 02:57:29.303201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ee7c50 00:35:58.656 [2024-12-16 02:57:29.304069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.656 [2024-12-16 02:57:29.304088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:58.656 [2024-12-16 02:57:29.313069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7dddc0) with pdu=0x200016ef20d8 00:35:58.915 [2024-12-16 02:57:29.314754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:58.915 [2024-12-16 02:57:29.314773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:58.915 28159.00 IOPS, 110.00 MiB/s 00:35:58.915 Latency(us) 00:35:58.915 [2024-12-16T01:57:29.574Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:58.915 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:58.915 nvme0n1 : 2.01 28164.32 110.02 0.00 0.00 4538.79 2044.10 12607.88 00:35:58.915 [2024-12-16T01:57:29.574Z] =================================================================================================================== 00:35:58.915 [2024-12-16T01:57:29.574Z] Total : 28164.32 110.02 0.00 0.00 4538.79 2044.10 12607.88 00:35:58.915 { 00:35:58.915 "results": [ 00:35:58.915 { 00:35:58.915 "job": "nvme0n1", 00:35:58.915 "core_mask": "0x2", 00:35:58.915 "workload": "randwrite", 00:35:58.915 "status": "finished", 00:35:58.915 "queue_depth": 128, 00:35:58.915 "io_size": 4096, 00:35:58.915 "runtime": 2.006404, 
00:35:58.915 "iops": 28164.317854230754, 00:35:58.915 "mibps": 110.01686661808888, 00:35:58.915 "io_failed": 0, 00:35:58.915 "io_timeout": 0, 00:35:58.915 "avg_latency_us": 4538.78850401411, 00:35:58.915 "min_latency_us": 2044.0990476190477, 00:35:58.915 "max_latency_us": 12607.878095238095 00:35:58.915 } 00:35:58.915 ], 00:35:58.915 "core_count": 1 00:35:58.915 } 00:35:58.915 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:58.915 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:58.915 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:58.915 | .driver_specific 00:35:58.915 | .nvme_error 00:35:58.915 | .status_code 00:35:58.915 | .command_transient_transport_error' 00:35:58.915 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:58.915 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 221 > 0 )) 00:35:58.915 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1199176 00:35:58.915 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1199176 ']' 00:35:58.915 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1199176 00:35:58.915 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:58.915 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:58.915 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1199176 00:35:59.174 02:57:29 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:59.174 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:59.174 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1199176' 00:35:59.174 killing process with pid 1199176 00:35:59.174 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1199176 00:35:59.174 Received shutdown signal, test time was about 2.000000 seconds 00:35:59.174 00:35:59.174 Latency(us) 00:35:59.174 [2024-12-16T01:57:29.833Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:59.174 [2024-12-16T01:57:29.833Z] =================================================================================================================== 00:35:59.174 [2024-12-16T01:57:29.833Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:59.174 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1199176 00:35:59.174 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:35:59.174 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:59.174 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:59.174 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:59.174 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:59.174 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1199693 00:35:59.174 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1199693 /var/tmp/bperf.sock 00:35:59.174 02:57:29 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:35:59.174 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1199693 ']' 00:35:59.174 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:59.174 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:59.174 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:59.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:59.174 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:59.174 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:59.174 [2024-12-16 02:57:29.797342] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:59.174 [2024-12-16 02:57:29.797405] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1199693 ] 00:35:59.174 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:59.174 Zero copy mechanism will not be used. 
00:35:59.434 [2024-12-16 02:57:29.871940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:59.434 [2024-12-16 02:57:29.891421] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:59.434 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:59.434 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:59.434 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:59.434 02:57:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:59.693 02:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:59.693 02:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.693 02:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:59.693 02:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.693 02:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:59.693 02:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:59.951 nvme0n1 00:35:59.951 02:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:59.951 02:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.951 02:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:59.951 02:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.951 02:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:59.951 02:57:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:00.212 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:00.212 Zero copy mechanism will not be used. 00:36:00.212 Running I/O for 2 seconds... 00:36:00.212 [2024-12-16 02:57:30.684288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.212 [2024-12-16 02:57:30.684395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.212 [2024-12-16 02:57:30.684425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:00.212 [2024-12-16 02:57:30.690592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.212 [2024-12-16 02:57:30.690672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.212 [2024-12-16 02:57:30.690695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:00.212 [2024-12-16 
02:57:30.696146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.212 [2024-12-16 02:57:30.696302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.212 [2024-12-16 02:57:30.696331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:00.212 [2024-12-16 02:57:30.702627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.212 [2024-12-16 02:57:30.702776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.212 [2024-12-16 02:57:30.702796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:00.212 [2024-12-16 02:57:30.709093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.212 [2024-12-16 02:57:30.709251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.212 [2024-12-16 02:57:30.709272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:00.212 [2024-12-16 02:57:30.715559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.212 [2024-12-16 02:57:30.715721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.212 [2024-12-16 02:57:30.715741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:36:00.212 [2024-12-16 02:57:30.721861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.212 [2024-12-16 02:57:30.722015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.212 [2024-12-16 02:57:30.722036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:00.212 [2024-12-16 02:57:30.728266] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.212 [2024-12-16 02:57:30.728416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.212 [2024-12-16 02:57:30.728436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:00.212 [2024-12-16 02:57:30.734742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.212 [2024-12-16 02:57:30.734905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.212 [2024-12-16 02:57:30.734925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:00.212 [2024-12-16 02:57:30.741596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.212 [2024-12-16 02:57:30.741718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.212 [2024-12-16 02:57:30.741736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:00.212 [2024-12-16 02:57:30.748621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.212 [2024-12-16 02:57:30.748719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.212 [2024-12-16 02:57:30.748739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:00.212 [2024-12-16 02:57:30.754120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.212 [2024-12-16 02:57:30.754198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.212 [2024-12-16 02:57:30.754217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:00.212 [2024-12-16 02:57:30.758957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.212 [2024-12-16 02:57:30.759012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.212 [2024-12-16 02:57:30.759030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:00.212 [2024-12-16 02:57:30.764406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.212 [2024-12-16 02:57:30.764459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.212 [2024-12-16 02:57:30.764478] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:00.212 [2024-12-16 02:57:30.769419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.212 [2024-12-16 02:57:30.769508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.212 [2024-12-16 02:57:30.769527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:00.212 [2024-12-16 02:57:30.774658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.212 [2024-12-16 02:57:30.774763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.212 [2024-12-16 02:57:30.774782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:00.212 [2024-12-16 02:57:30.779936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.212 [2024-12-16 02:57:30.779995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.212 [2024-12-16 02:57:30.780014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:00.212 [2024-12-16 02:57:30.784977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.212 [2024-12-16 02:57:30.785078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.212 
[2024-12-16 02:57:30.785097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:00.212 [2024-12-16 02:57:30.790110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.212 [2024-12-16 02:57:30.790175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.212 [2024-12-16 02:57:30.790194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:00.212 [2024-12-16 02:57:30.795423] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.212 [2024-12-16 02:57:30.795481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.212 [2024-12-16 02:57:30.795499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:00.212 [2024-12-16 02:57:30.800509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.212 [2024-12-16 02:57:30.800641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.212 [2024-12-16 02:57:30.800660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:00.212 [2024-12-16 02:57:30.805921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.212 [2024-12-16 02:57:30.805974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.212 [2024-12-16 02:57:30.805992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:00.212 [2024-12-16 02:57:30.810757] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.212 [2024-12-16 02:57:30.810824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.212 [2024-12-16 02:57:30.810843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:00.212 [2024-12-16 02:57:30.815812] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.212 [2024-12-16 02:57:30.815868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.212 [2024-12-16 02:57:30.815886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:00.212 [2024-12-16 02:57:30.820540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.212 [2024-12-16 02:57:30.820630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.212 [2024-12-16 02:57:30.820649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:00.212 [2024-12-16 02:57:30.825804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.212 [2024-12-16 02:57:30.825860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.212 [2024-12-16 02:57:30.825878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:00.212 [2024-12-16 02:57:30.830924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.212 [2024-12-16 02:57:30.830978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.213 [2024-12-16 02:57:30.830997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:00.213 [2024-12-16 02:57:30.837180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.213 [2024-12-16 02:57:30.837246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.213 [2024-12-16 02:57:30.837265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:00.213 [2024-12-16 02:57:30.843637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.213 [2024-12-16 02:57:30.843781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.213 [2024-12-16 02:57:30.843808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:00.213 [2024-12-16 02:57:30.850793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.213 [2024-12-16 02:57:30.850862] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.213 [2024-12-16 02:57:30.850881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:00.213 [2024-12-16 02:57:30.857640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.213 [2024-12-16 02:57:30.857839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.213 [2024-12-16 02:57:30.857865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:00.213 [2024-12-16 02:57:30.863438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.213 [2024-12-16 02:57:30.863549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.213 [2024-12-16 02:57:30.863567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:00.213 [2024-12-16 02:57:30.868221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.213 [2024-12-16 02:57:30.868277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.213 [2024-12-16 02:57:30.868295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:00.473 [2024-12-16 02:57:30.872820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 
00:36:00.473 [2024-12-16 02:57:30.872884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.473 [2024-12-16 02:57:30.872903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:00.473 [2024-12-16 02:57:30.877575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.473 [2024-12-16 02:57:30.877674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.473 [2024-12-16 02:57:30.877692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:00.473 [2024-12-16 02:57:30.883259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.473 [2024-12-16 02:57:30.883423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.473 [2024-12-16 02:57:30.883443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:00.473 [2024-12-16 02:57:30.889397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.473 [2024-12-16 02:57:30.889485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.473 [2024-12-16 02:57:30.889504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:00.473 [2024-12-16 02:57:30.894612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.474 [2024-12-16 02:57:30.894779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.474 [2024-12-16 02:57:30.894798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:00.474 [2024-12-16 02:57:30.899669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.474 [2024-12-16 02:57:30.899764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.474 [2024-12-16 02:57:30.899783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:00.474 [2024-12-16 02:57:30.904579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.474 [2024-12-16 02:57:30.904675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.474 [2024-12-16 02:57:30.904694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:00.474 [2024-12-16 02:57:30.909699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.474 [2024-12-16 02:57:30.909805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.474 [2024-12-16 02:57:30.909824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:00.474 [2024-12-16 02:57:30.914659] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.474 [2024-12-16 02:57:30.914746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.474 [2024-12-16 02:57:30.914765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:00.474 [2024-12-16 02:57:30.919613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.474 [2024-12-16 02:57:30.919775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.474 [2024-12-16 02:57:30.919795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:00.474 [2024-12-16 02:57:30.924625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.474 [2024-12-16 02:57:30.924726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.474 [2024-12-16 02:57:30.924744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:00.474 [2024-12-16 02:57:30.929595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.474 [2024-12-16 02:57:30.929695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.474 [2024-12-16 02:57:30.929714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:36:00.474 [2024-12-16 02:57:30.934468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.474 [2024-12-16 02:57:30.934572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.474 [2024-12-16 02:57:30.934591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:00.474 [2024-12-16 02:57:30.939542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.474 [2024-12-16 02:57:30.939624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.474 [2024-12-16 02:57:30.939642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:00.474 [2024-12-16 02:57:30.945273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.474 [2024-12-16 02:57:30.945434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.474 [2024-12-16 02:57:30.945455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:00.474 [2024-12-16 02:57:30.951256] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.474 [2024-12-16 02:57:30.951337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.474 [2024-12-16 02:57:30.951355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:00.474 [2024-12-16 02:57:30.957216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.474 [2024-12-16 02:57:30.957278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.474 [2024-12-16 02:57:30.957296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:00.474 [2024-12-16 02:57:30.962750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.474 [2024-12-16 02:57:30.962824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.474 [2024-12-16 02:57:30.962842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:00.474 [2024-12-16 02:57:30.967996] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.474 [2024-12-16 02:57:30.968065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.474 [2024-12-16 02:57:30.968084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:00.474 [2024-12-16 02:57:30.973192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.474 [2024-12-16 02:57:30.973246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.474 [2024-12-16 02:57:30.973264] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:00.474 [2024-12-16 02:57:30.978468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.474 [2024-12-16 02:57:30.978544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.474 [2024-12-16 02:57:30.978562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:00.474 [2024-12-16 02:57:30.983586] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.474 [2024-12-16 02:57:30.983647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.474 [2024-12-16 02:57:30.983670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:00.474 [2024-12-16 02:57:30.988662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.474 [2024-12-16 02:57:30.988719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.474 [2024-12-16 02:57:30.988737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:00.474 [2024-12-16 02:57:30.993918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.474 [2024-12-16 02:57:30.993975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.474 [2024-12-16 02:57:30.994001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:00.474 [2024-12-16 02:57:30.998844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.474 [2024-12-16 02:57:30.998908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.474 [2024-12-16 02:57:30.998926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:00.474 [2024-12-16 02:57:31.003475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.474 [2024-12-16 02:57:31.003539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.474 [2024-12-16 02:57:31.003557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:00.474 [2024-12-16 02:57:31.008254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.474 [2024-12-16 02:57:31.008340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.474 [2024-12-16 02:57:31.008359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:00.474 [2024-12-16 02:57:31.013420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.474 [2024-12-16 02:57:31.013482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.474 [2024-12-16 02:57:31.013501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:00.474 [2024-12-16 02:57:31.018638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.474 [2024-12-16 02:57:31.018692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.474 [2024-12-16 02:57:31.018710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:00.474 [2024-12-16 02:57:31.023509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.474 [2024-12-16 02:57:31.023576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.474 [2024-12-16 02:57:31.023594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:00.474 [2024-12-16 02:57:31.028216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.474 [2024-12-16 02:57:31.028275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.474 [2024-12-16 02:57:31.028294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:00.474 [2024-12-16 02:57:31.033280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.474 [2024-12-16 02:57:31.033372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.474 [2024-12-16 02:57:31.033391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:00.474 [2024-12-16 02:57:31.038413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.475 [2024-12-16 02:57:31.038545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.475 [2024-12-16 02:57:31.038564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:00.475 [2024-12-16 02:57:31.043424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.475 [2024-12-16 02:57:31.043484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.475 [2024-12-16 02:57:31.043502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:00.475 [2024-12-16 02:57:31.048852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.475 [2024-12-16 02:57:31.049015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.475 [2024-12-16 02:57:31.049035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:00.475 [2024-12-16 02:57:31.054102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.475 [2024-12-16 02:57:31.054192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.475 [2024-12-16 02:57:31.054210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:00.475 [2024-12-16 02:57:31.060036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.475 [2024-12-16 02:57:31.060117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.475 [2024-12-16 02:57:31.060135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:00.475 [2024-12-16 02:57:31.065534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.475 [2024-12-16 02:57:31.065600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.475 [2024-12-16 02:57:31.065618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:00.475 [2024-12-16 02:57:31.070570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.475 [2024-12-16 02:57:31.070654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.475 [2024-12-16 02:57:31.070672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:00.475 [2024-12-16 02:57:31.075355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.475 [2024-12-16 02:57:31.075412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.475 [2024-12-16 02:57:31.075430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:00.475 [2024-12-16 02:57:31.080136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.475 [2024-12-16 02:57:31.080212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.475 [2024-12-16 02:57:31.080231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:00.475 [2024-12-16 02:57:31.084763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.475 [2024-12-16 02:57:31.084859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.475 [2024-12-16 02:57:31.084877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:00.475 [2024-12-16 02:57:31.089494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.475 [2024-12-16 02:57:31.089545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.475 [2024-12-16 02:57:31.089563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:00.475 [2024-12-16 02:57:31.094336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.475 [2024-12-16 02:57:31.094447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.475 [2024-12-16 02:57:31.094465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:00.475 [2024-12-16 02:57:31.098783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.475 [2024-12-16 02:57:31.098858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.475 [2024-12-16 02:57:31.098877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:00.475 [2024-12-16 02:57:31.103270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.475 [2024-12-16 02:57:31.103340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.475 [2024-12-16 02:57:31.103359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:00.475 [2024-12-16 02:57:31.107924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.475 [2024-12-16 02:57:31.108003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.475 [2024-12-16 02:57:31.108021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:00.475 [2024-12-16 02:57:31.112250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.475 [2024-12-16 02:57:31.112338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.475 [2024-12-16 02:57:31.112359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:00.475 [2024-12-16 02:57:31.116656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.475 [2024-12-16 02:57:31.116731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.475 [2024-12-16 02:57:31.116749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:00.475 [2024-12-16 02:57:31.121044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.475 [2024-12-16 02:57:31.121103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.475 [2024-12-16 02:57:31.121121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:00.475 [2024-12-16 02:57:31.125478] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.475 [2024-12-16 02:57:31.125550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.475 [2024-12-16 02:57:31.125568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:00.475 [2024-12-16 02:57:31.129901] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.475 [2024-12-16 02:57:31.129972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.475 [2024-12-16 02:57:31.129991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:00.735 [2024-12-16 02:57:31.134282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.736 [2024-12-16 02:57:31.134357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.736 [2024-12-16 02:57:31.134375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:00.736 [2024-12-16 02:57:31.139019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.736 [2024-12-16 02:57:31.139083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.736 [2024-12-16 02:57:31.139101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:00.736 [2024-12-16 02:57:31.143536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.736 [2024-12-16 02:57:31.143600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.736 [2024-12-16 02:57:31.143618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:00.736 [2024-12-16 02:57:31.148026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.736 [2024-12-16 02:57:31.148090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.736 [2024-12-16 02:57:31.148108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:00.736 [2024-12-16 02:57:31.152499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.736 [2024-12-16 02:57:31.152562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.736 [2024-12-16 02:57:31.152584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:00.736 [2024-12-16 02:57:31.156924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.736 [2024-12-16 02:57:31.156978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.736 [2024-12-16 02:57:31.156997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:00.736 [2024-12-16 02:57:31.161508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.736 [2024-12-16 02:57:31.161581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.736 [2024-12-16 02:57:31.161599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:00.736 [2024-12-16 02:57:31.165867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.736 [2024-12-16 02:57:31.165927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.736 [2024-12-16 02:57:31.165946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:00.736 [2024-12-16 02:57:31.170192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.736 [2024-12-16 02:57:31.170253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.736 [2024-12-16 02:57:31.170271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:00.736 [2024-12-16 02:57:31.174665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.736 [2024-12-16 02:57:31.174817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.736 [2024-12-16 02:57:31.174838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:00.736 [2024-12-16 02:57:31.180178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.736 [2024-12-16 02:57:31.180337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.736 [2024-12-16 02:57:31.180356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:00.736 [2024-12-16 02:57:31.186673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.736 [2024-12-16 02:57:31.186833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.736 [2024-12-16 02:57:31.186859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:00.736 [2024-12-16 02:57:31.192250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.736 [2024-12-16 02:57:31.192340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.736 [2024-12-16 02:57:31.192360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:00.736 [2024-12-16 02:57:31.197626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.736 [2024-12-16 02:57:31.197738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.736 [2024-12-16 02:57:31.197757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:00.736 [2024-12-16 02:57:31.202756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.736 [2024-12-16 02:57:31.202828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.736 [2024-12-16 02:57:31.202859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:00.736 [2024-12-16 02:57:31.207500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.736 [2024-12-16 02:57:31.207597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.736 [2024-12-16 02:57:31.207615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:00.736 [2024-12-16 02:57:31.213180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.736 [2024-12-16 02:57:31.213345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.736 [2024-12-16 02:57:31.213365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:00.736 [2024-12-16 02:57:31.219175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.736 [2024-12-16 02:57:31.219324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.736 [2024-12-16 02:57:31.219343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:00.736 [2024-12-16 02:57:31.224791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.736 [2024-12-16 02:57:31.224975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.736 [2024-12-16 02:57:31.224995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:00.736 [2024-12-16 02:57:31.231449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.736 [2024-12-16 02:57:31.231586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.736 [2024-12-16 02:57:31.231604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:00.736 [2024-12-16 02:57:31.237171] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.736 [2024-12-16 02:57:31.237243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.736 [2024-12-16 02:57:31.237262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:00.736 [2024-12-16 02:57:31.241667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.736 [2024-12-16 02:57:31.241732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.736 [2024-12-16 02:57:31.241750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:00.736 [2024-12-16 02:57:31.246132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.736 [2024-12-16 02:57:31.246203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.736 [2024-12-16 02:57:31.246221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:00.736 [2024-12-16 02:57:31.250603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.736 [2024-12-16 02:57:31.250664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.736 [2024-12-16 02:57:31.250682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:00.736 [2024-12-16 02:57:31.255056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.736 [2024-12-16 02:57:31.255123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.736 [2024-12-16 02:57:31.255142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:00.736 [2024-12-16 02:57:31.259503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.736 [2024-12-16 02:57:31.259556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.736 [2024-12-16 02:57:31.259574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:00.736 [2024-12-16 02:57:31.263927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.736 [2024-12-16 02:57:31.263978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.736 [2024-12-16 02:57:31.263997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:00.736 [2024-12-16 02:57:31.268295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.736 [2024-12-16 02:57:31.268355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.737 [2024-12-16 02:57:31.268374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:00.737 [2024-12-16 02:57:31.272663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.737 [2024-12-16 02:57:31.272728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.737 [2024-12-16 02:57:31.272747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:00.737 [2024-12-16 02:57:31.277012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.737 [2024-12-16 02:57:31.277086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.737 [2024-12-16 02:57:31.277104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:00.737 [2024-12-16 02:57:31.281296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.737 [2024-12-16 02:57:31.281363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.737 [2024-12-16 02:57:31.281385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:00.737 [2024-12-16 02:57:31.285758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.737 [2024-12-16 02:57:31.285819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.737 [2024-12-16 02:57:31.285837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:00.737 [2024-12-16 02:57:31.290081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.737 [2024-12-16 02:57:31.290142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.737 [2024-12-16 02:57:31.290161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:00.737 [2024-12-16 02:57:31.294398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.737 [2024-12-16 02:57:31.294451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.737 [2024-12-16 02:57:31.294471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:00.737 [2024-12-16 02:57:31.298715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.737 [2024-12-16 02:57:31.298768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.737 [2024-12-16 02:57:31.298786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:00.737 [2024-12-16 02:57:31.302972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.737 [2024-12-16 02:57:31.303037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.737 [2024-12-16 02:57:31.303056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:00.737 [2024-12-16 02:57:31.307290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.737 [2024-12-16 02:57:31.307357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.737 [2024-12-16 02:57:31.307375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:00.737 [2024-12-16 02:57:31.311583] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.737 [2024-12-16 02:57:31.311649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.737 [2024-12-16 02:57:31.311668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:00.737 [2024-12-16 02:57:31.316015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.737 [2024-12-16 02:57:31.316068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.737 [2024-12-16 02:57:31.316087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:00.737 [2024-12-16 02:57:31.320332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.737 [2024-12-16 02:57:31.320393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.737 [2024-12-16 02:57:31.320412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:00.737 [2024-12-16 02:57:31.324889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.737 [2024-12-16 02:57:31.324952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.737 [2024-12-16 02:57:31.324970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:00.737 [2024-12-16 02:57:31.329167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.737 [2024-12-16 02:57:31.329235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.737 [2024-12-16 02:57:31.329253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:00.737 [2024-12-16 02:57:31.333576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.737 [2024-12-16 02:57:31.333640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.737 [2024-12-16 02:57:31.333658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:00.737 [2024-12-16 02:57:31.337869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.737 [2024-12-16 02:57:31.337934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.737 [2024-12-16 02:57:31.337953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:00.737 [2024-12-16 02:57:31.342283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.737 [2024-12-16 02:57:31.342345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.737 [2024-12-16 02:57:31.342363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:00.737 [2024-12-16 02:57:31.346582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.737 [2024-12-16 02:57:31.346636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.737 [2024-12-16 02:57:31.346671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:00.737 [2024-12-16 02:57:31.350933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.737 [2024-12-16 02:57:31.350982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.737 [2024-12-16 02:57:31.351001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:00.737 [2024-12-16 02:57:31.355236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.737 [2024-12-16 02:57:31.355289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.737 [2024-12-16 02:57:31.355308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:00.737 [2024-12-16 02:57:31.359521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.737 [2024-12-16 02:57:31.359583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.737 [2024-12-16 02:57:31.359602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:00.737 [2024-12-16 02:57:31.363795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.737 [2024-12-16 02:57:31.363872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.737 [2024-12-16 02:57:31.363891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:00.737 [2024-12-16 02:57:31.368135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.737 [2024-12-16 02:57:31.368192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-12-16 02:57:31.368210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:00.737 [2024-12-16 02:57:31.372332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.737 [2024-12-16 02:57:31.372401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-12-16 02:57:31.372420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:00.737 [2024-12-16 02:57:31.376711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.737 [2024-12-16 02:57:31.376771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-12-16 02:57:31.376790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:00.737 [2024-12-16 02:57:31.380945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.737 [2024-12-16 02:57:31.381015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-12-16 02:57:31.381034] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:00.737 [2024-12-16 02:57:31.385298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.737 [2024-12-16 02:57:31.385349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.737 [2024-12-16 02:57:31.385368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:00.737 [2024-12-16 02:57:31.389607] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.738 [2024-12-16 02:57:31.389672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.738 [2024-12-16 02:57:31.389690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:00.998 [2024-12-16 02:57:31.394086] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.998 [2024-12-16 02:57:31.394140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.998 [2024-12-16 02:57:31.394162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:00.998 [2024-12-16 02:57:31.398761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.998 [2024-12-16 02:57:31.398871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.998 [2024-12-16 02:57:31.398890] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:00.998 [2024-12-16 02:57:31.403382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.998 [2024-12-16 02:57:31.403437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.998 [2024-12-16 02:57:31.403456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:00.998 [2024-12-16 02:57:31.408261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.998 [2024-12-16 02:57:31.408316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.998 [2024-12-16 02:57:31.408335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:00.998 [2024-12-16 02:57:31.413182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.998 [2024-12-16 02:57:31.413251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.998 [2024-12-16 02:57:31.413269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:00.998 [2024-12-16 02:57:31.417820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:00.998 [2024-12-16 02:57:31.417885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:00.998 [2024-12-16 02:57:31.417904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:00.998 [2024-12-16 02:57:31.422542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.998 [2024-12-16 02:57:31.422600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.998 [2024-12-16 02:57:31.422618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:00.998 [2024-12-16 02:57:31.427161] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.998 [2024-12-16 02:57:31.427229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.998 [2024-12-16 02:57:31.427247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:00.998 [2024-12-16 02:57:31.431728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.998 [2024-12-16 02:57:31.431810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.998 [2024-12-16 02:57:31.431829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:00.998 [2024-12-16 02:57:31.436501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.998 [2024-12-16 02:57:31.436561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.999 [2024-12-16 02:57:31.436579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:00.999 [2024-12-16 02:57:31.441231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.999 [2024-12-16 02:57:31.441309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.999 [2024-12-16 02:57:31.441328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:00.999 [2024-12-16 02:57:31.445838] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.999 [2024-12-16 02:57:31.445907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.999 [2024-12-16 02:57:31.445926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:00.999 [2024-12-16 02:57:31.450302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.999 [2024-12-16 02:57:31.450390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.999 [2024-12-16 02:57:31.450408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:00.999 [2024-12-16 02:57:31.455218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.999 [2024-12-16 02:57:31.455271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.999 [2024-12-16 02:57:31.455289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:00.999 [2024-12-16 02:57:31.460237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.999 [2024-12-16 02:57:31.460306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.999 [2024-12-16 02:57:31.460324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:00.999 [2024-12-16 02:57:31.465312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.999 [2024-12-16 02:57:31.465363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.999 [2024-12-16 02:57:31.465382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:00.999 [2024-12-16 02:57:31.470844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.999 [2024-12-16 02:57:31.470943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.999 [2024-12-16 02:57:31.470961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:00.999 [2024-12-16 02:57:31.477137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.999 [2024-12-16 02:57:31.477338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.999 [2024-12-16 02:57:31.477359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:00.999 [2024-12-16 02:57:31.483860] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.999 [2024-12-16 02:57:31.483917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.999 [2024-12-16 02:57:31.483935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:00.999 [2024-12-16 02:57:31.490008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.999 [2024-12-16 02:57:31.490084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.999 [2024-12-16 02:57:31.490103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:00.999 [2024-12-16 02:57:31.496765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.999 [2024-12-16 02:57:31.496827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.999 [2024-12-16 02:57:31.496850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:00.999 [2024-12-16 02:57:31.503107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.999 [2024-12-16 02:57:31.503299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.999 [2024-12-16 02:57:31.503320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:00.999 [2024-12-16 02:57:31.510370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.999 [2024-12-16 02:57:31.510437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.999 [2024-12-16 02:57:31.510456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:00.999 [2024-12-16 02:57:31.517401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.999 [2024-12-16 02:57:31.517469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.999 [2024-12-16 02:57:31.517488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:00.999 [2024-12-16 02:57:31.523972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.999 [2024-12-16 02:57:31.524090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.999 [2024-12-16 02:57:31.524109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:00.999 [2024-12-16 02:57:31.530521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.999 [2024-12-16 02:57:31.530609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.999 [2024-12-16 02:57:31.530627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:00.999 [2024-12-16 02:57:31.535753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.999 [2024-12-16 02:57:31.535904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.999 [2024-12-16 02:57:31.535926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:00.999 [2024-12-16 02:57:31.541159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.999 [2024-12-16 02:57:31.541222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.999 [2024-12-16 02:57:31.541240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:00.999 [2024-12-16 02:57:31.546242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.999 [2024-12-16 02:57:31.546332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.999 [2024-12-16 02:57:31.546350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:00.999 [2024-12-16 02:57:31.551481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.999 [2024-12-16 02:57:31.551602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.999 [2024-12-16 02:57:31.551621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:00.999 [2024-12-16 02:57:31.556490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.999 [2024-12-16 02:57:31.556549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.999 [2024-12-16 02:57:31.556566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:00.999 [2024-12-16 02:57:31.561923] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.999 [2024-12-16 02:57:31.562000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.999 [2024-12-16 02:57:31.562018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:00.999 [2024-12-16 02:57:31.567084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.999 [2024-12-16 02:57:31.567173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.999 [2024-12-16 02:57:31.567192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:00.999 [2024-12-16 02:57:31.572522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.999 [2024-12-16 02:57:31.572578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.999 [2024-12-16 02:57:31.572597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:00.999 [2024-12-16 02:57:31.577909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.999 [2024-12-16 02:57:31.577968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.999 [2024-12-16 02:57:31.577986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:00.999 [2024-12-16 02:57:31.582954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.999 [2024-12-16 02:57:31.583125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.999 [2024-12-16 02:57:31.583146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:00.999 [2024-12-16 02:57:31.587954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.999 [2024-12-16 02:57:31.588008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.999 [2024-12-16 02:57:31.588026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:00.999 [2024-12-16 02:57:31.593003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:00.999 [2024-12-16 02:57:31.593092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.000 [2024-12-16 02:57:31.593110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:01.000 [2024-12-16 02:57:31.597602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:01.000 [2024-12-16 02:57:31.597669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.000 [2024-12-16 02:57:31.597687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:01.000 [2024-12-16 02:57:31.604024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:01.000 [2024-12-16 02:57:31.604076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.000 [2024-12-16 02:57:31.604095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:01.000 [2024-12-16 02:57:31.609905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:01.000 [2024-12-16 02:57:31.609989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.000 [2024-12-16 02:57:31.610007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:01.000 [2024-12-16 02:57:31.616343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:01.000 [2024-12-16 02:57:31.616518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.000 [2024-12-16 02:57:31.616539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:01.000 [2024-12-16 02:57:31.623253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:01.000 [2024-12-16 02:57:31.623330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.000 [2024-12-16 02:57:31.623348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:01.000 [2024-12-16 02:57:31.629879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:01.000 [2024-12-16 02:57:31.629971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.000 [2024-12-16 02:57:31.629989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:01.000 [2024-12-16 02:57:31.635495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:01.000 [2024-12-16 02:57:31.635604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.000 [2024-12-16 02:57:31.635623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:01.000 [2024-12-16 02:57:31.640737] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:01.000 [2024-12-16 02:57:31.640843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.000 [2024-12-16 02:57:31.640866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:01.000 [2024-12-16 02:57:31.646088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:01.000 [2024-12-16 02:57:31.646186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.000 [2024-12-16 02:57:31.646204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:01.000 [2024-12-16 02:57:31.651162] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:01.000 [2024-12-16 02:57:31.651258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.000 [2024-12-16 02:57:31.651276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:01.260 [2024-12-16 02:57:31.656263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:01.260 [2024-12-16 02:57:31.656358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.260 [2024-12-16 02:57:31.656376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:01.260 [2024-12-16 02:57:31.660915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:01.261 [2024-12-16 02:57:31.660978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.261 [2024-12-16 02:57:31.660997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:01.261 [2024-12-16 02:57:31.665271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:01.261 [2024-12-16 02:57:31.665344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.261 [2024-12-16 02:57:31.665363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:01.261 [2024-12-16 02:57:31.669778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:01.261 [2024-12-16 02:57:31.669852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.261 [2024-12-16 02:57:31.669871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:01.261 [2024-12-16 02:57:31.674136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:01.261 [2024-12-16 02:57:31.674209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.261 [2024-12-16 02:57:31.674231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:01.261 [2024-12-16 02:57:31.678692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:01.261 [2024-12-16 02:57:31.678749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.261 [2024-12-16 02:57:31.678766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:01.261 [2024-12-16 02:57:31.682946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:01.261 [2024-12-16 02:57:31.682998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.261 [2024-12-16 02:57:31.683016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:01.261 6057.00 IOPS, 757.12 MiB/s [2024-12-16T01:57:31.920Z] [2024-12-16 02:57:31.688190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:01.261 [2024-12-16 02:57:31.688269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.261 [2024-12-16 02:57:31.688289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:01.261 [2024-12-16 02:57:31.692786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:01.261 [2024-12-16 02:57:31.692858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.261 [2024-12-16 02:57:31.692877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:36:01.261 [2024-12-16 02:57:31.697106] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:01.261 [2024-12-16 02:57:31.697173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.261 [2024-12-16 02:57:31.697192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:36:01.261 [2024-12-16 02:57:31.701694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:01.261 [2024-12-16 02:57:31.701749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.261 [2024-12-16 02:57:31.701769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:36:01.261 [2024-12-16 02:57:31.706447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8
00:36:01.261 [2024-12-16 02:57:31.706589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.261 [2024-12-16 02:57:31.706607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:36:01.261 [2024-12-16 02:57:31.711686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with
pdu=0x200016eff3c8 00:36:01.261 [2024-12-16 02:57:31.711754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.261 [2024-12-16 02:57:31.711772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.261 [2024-12-16 02:57:31.716990] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.261 [2024-12-16 02:57:31.717049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.261 [2024-12-16 02:57:31.717068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.261 [2024-12-16 02:57:31.722683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.261 [2024-12-16 02:57:31.722739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.261 [2024-12-16 02:57:31.722759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.261 [2024-12-16 02:57:31.728324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.261 [2024-12-16 02:57:31.728407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.261 [2024-12-16 02:57:31.728426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.261 [2024-12-16 02:57:31.735148] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.261 [2024-12-16 02:57:31.735313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.261 [2024-12-16 02:57:31.735331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.261 [2024-12-16 02:57:31.742408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.261 [2024-12-16 02:57:31.742493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.261 [2024-12-16 02:57:31.742512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.261 [2024-12-16 02:57:31.749277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.261 [2024-12-16 02:57:31.749445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.261 [2024-12-16 02:57:31.749466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.261 [2024-12-16 02:57:31.756947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.261 [2024-12-16 02:57:31.757090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.261 [2024-12-16 02:57:31.757109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.261 [2024-12-16 
02:57:31.764013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.261 [2024-12-16 02:57:31.764239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.261 [2024-12-16 02:57:31.764261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.261 [2024-12-16 02:57:31.770653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.261 [2024-12-16 02:57:31.770924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.261 [2024-12-16 02:57:31.770945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.261 [2024-12-16 02:57:31.777429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.261 [2024-12-16 02:57:31.777761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.261 [2024-12-16 02:57:31.777782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.261 [2024-12-16 02:57:31.784180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.261 [2024-12-16 02:57:31.784526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.261 [2024-12-16 02:57:31.784547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:36:01.261 [2024-12-16 02:57:31.791364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.261 [2024-12-16 02:57:31.791654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.261 [2024-12-16 02:57:31.791674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.261 [2024-12-16 02:57:31.797946] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.261 [2024-12-16 02:57:31.798254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.261 [2024-12-16 02:57:31.798275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.261 [2024-12-16 02:57:31.804451] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.261 [2024-12-16 02:57:31.804765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.261 [2024-12-16 02:57:31.804786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.261 [2024-12-16 02:57:31.811612] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.261 [2024-12-16 02:57:31.811927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.261 [2024-12-16 02:57:31.811948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.262 [2024-12-16 02:57:31.818540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.262 [2024-12-16 02:57:31.818861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.262 [2024-12-16 02:57:31.818881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.262 [2024-12-16 02:57:31.825273] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.262 [2024-12-16 02:57:31.825873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.262 [2024-12-16 02:57:31.825893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.262 [2024-12-16 02:57:31.832468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.262 [2024-12-16 02:57:31.832776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.262 [2024-12-16 02:57:31.832801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.262 [2024-12-16 02:57:31.839182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.262 [2024-12-16 02:57:31.839442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.262 [2024-12-16 02:57:31.839463] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.262 [2024-12-16 02:57:31.845884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.262 [2024-12-16 02:57:31.846129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.262 [2024-12-16 02:57:31.846150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.262 [2024-12-16 02:57:31.852458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.262 [2024-12-16 02:57:31.852710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.262 [2024-12-16 02:57:31.852731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.262 [2024-12-16 02:57:31.859663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.262 [2024-12-16 02:57:31.860013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.262 [2024-12-16 02:57:31.860035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.262 [2024-12-16 02:57:31.866393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.262 [2024-12-16 02:57:31.866537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.262 
[2024-12-16 02:57:31.866556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.262 [2024-12-16 02:57:31.872945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.262 [2024-12-16 02:57:31.873178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.262 [2024-12-16 02:57:31.873199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.262 [2024-12-16 02:57:31.879034] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.262 [2024-12-16 02:57:31.879303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.262 [2024-12-16 02:57:31.879323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.262 [2024-12-16 02:57:31.885351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.262 [2024-12-16 02:57:31.885650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.262 [2024-12-16 02:57:31.885671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.262 [2024-12-16 02:57:31.892095] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.262 [2024-12-16 02:57:31.892400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.262 [2024-12-16 02:57:31.892421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.262 [2024-12-16 02:57:31.899173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.262 [2024-12-16 02:57:31.899415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.262 [2024-12-16 02:57:31.899436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.262 [2024-12-16 02:57:31.906014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.262 [2024-12-16 02:57:31.906352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.262 [2024-12-16 02:57:31.906373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.262 [2024-12-16 02:57:31.913181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.262 [2024-12-16 02:57:31.913454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.262 [2024-12-16 02:57:31.913475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.523 [2024-12-16 02:57:31.920346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.523 [2024-12-16 02:57:31.920636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.523 [2024-12-16 02:57:31.920656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.523 [2024-12-16 02:57:31.927028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.523 [2024-12-16 02:57:31.927340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.523 [2024-12-16 02:57:31.927361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.523 [2024-12-16 02:57:31.934631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.523 [2024-12-16 02:57:31.934902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.523 [2024-12-16 02:57:31.934923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.523 [2024-12-16 02:57:31.940786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.523 [2024-12-16 02:57:31.941028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.523 [2024-12-16 02:57:31.941049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.523 [2024-12-16 02:57:31.945226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.523 [2024-12-16 02:57:31.945466] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.523 [2024-12-16 02:57:31.945487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.523 [2024-12-16 02:57:31.949437] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.523 [2024-12-16 02:57:31.949680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.523 [2024-12-16 02:57:31.949700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.523 [2024-12-16 02:57:31.953700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.523 [2024-12-16 02:57:31.953950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.523 [2024-12-16 02:57:31.953972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.523 [2024-12-16 02:57:31.957874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.523 [2024-12-16 02:57:31.958149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.523 [2024-12-16 02:57:31.958171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.523 [2024-12-16 02:57:31.962041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.523 
[2024-12-16 02:57:31.962295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.523 [2024-12-16 02:57:31.962316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.523 [2024-12-16 02:57:31.966205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.523 [2024-12-16 02:57:31.966446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.523 [2024-12-16 02:57:31.966467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.523 [2024-12-16 02:57:31.970351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.523 [2024-12-16 02:57:31.970608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.523 [2024-12-16 02:57:31.970628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.523 [2024-12-16 02:57:31.974663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.523 [2024-12-16 02:57:31.974927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.523 [2024-12-16 02:57:31.974950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.523 [2024-12-16 02:57:31.979042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.523 [2024-12-16 02:57:31.979293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.523 [2024-12-16 02:57:31.979313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.523 [2024-12-16 02:57:31.984002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.523 [2024-12-16 02:57:31.984246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.523 [2024-12-16 02:57:31.984272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.523 [2024-12-16 02:57:31.989018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.523 [2024-12-16 02:57:31.989258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.523 [2024-12-16 02:57:31.989279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.523 [2024-12-16 02:57:31.993909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.523 [2024-12-16 02:57:31.994140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.523 [2024-12-16 02:57:31.994161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.523 [2024-12-16 02:57:31.998794] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.523 [2024-12-16 02:57:31.999019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.523 [2024-12-16 02:57:31.999042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.523 [2024-12-16 02:57:32.003647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.523 [2024-12-16 02:57:32.003873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.523 [2024-12-16 02:57:32.003894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.523 [2024-12-16 02:57:32.010071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.523 [2024-12-16 02:57:32.010424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.523 [2024-12-16 02:57:32.010445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.523 [2024-12-16 02:57:32.017139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.523 [2024-12-16 02:57:32.017466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.523 [2024-12-16 02:57:32.017486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:36:01.523 [2024-12-16 02:57:32.024075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.523 [2024-12-16 02:57:32.024381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.523 [2024-12-16 02:57:32.024402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.523 [2024-12-16 02:57:32.031341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.523 [2024-12-16 02:57:32.031666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.523 [2024-12-16 02:57:32.031687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.523 [2024-12-16 02:57:32.038135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.523 [2024-12-16 02:57:32.038456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.524 [2024-12-16 02:57:32.038476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.524 [2024-12-16 02:57:32.044750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.524 [2024-12-16 02:57:32.045082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.524 [2024-12-16 02:57:32.045103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.524 [2024-12-16 02:57:32.051397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.524 [2024-12-16 02:57:32.051674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.524 [2024-12-16 02:57:32.051694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.524 [2024-12-16 02:57:32.058435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.524 [2024-12-16 02:57:32.058744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.524 [2024-12-16 02:57:32.058764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.524 [2024-12-16 02:57:32.065053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.524 [2024-12-16 02:57:32.065360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.524 [2024-12-16 02:57:32.065381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.524 [2024-12-16 02:57:32.071981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.524 [2024-12-16 02:57:32.072130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.524 [2024-12-16 02:57:32.072148] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.524 [2024-12-16 02:57:32.078397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.524 [2024-12-16 02:57:32.078694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.524 [2024-12-16 02:57:32.078714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.524 [2024-12-16 02:57:32.085229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.524 [2024-12-16 02:57:32.085420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.524 [2024-12-16 02:57:32.085441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.524 [2024-12-16 02:57:32.092480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.524 [2024-12-16 02:57:32.092759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.524 [2024-12-16 02:57:32.092780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.524 [2024-12-16 02:57:32.099049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.524 [2024-12-16 02:57:32.099276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.524 [2024-12-16 02:57:32.099296] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.524 [2024-12-16 02:57:32.105024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.524 [2024-12-16 02:57:32.105236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.524 [2024-12-16 02:57:32.105257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.524 [2024-12-16 02:57:32.110536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.524 [2024-12-16 02:57:32.110747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.524 [2024-12-16 02:57:32.110768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.524 [2024-12-16 02:57:32.116400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.524 [2024-12-16 02:57:32.116664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.524 [2024-12-16 02:57:32.116685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.524 [2024-12-16 02:57:32.121407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.524 [2024-12-16 02:57:32.121593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:36:01.524 [2024-12-16 02:57:32.121613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.524 [2024-12-16 02:57:32.125736] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.524 [2024-12-16 02:57:32.125956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.524 [2024-12-16 02:57:32.125977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.524 [2024-12-16 02:57:32.130529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.524 [2024-12-16 02:57:32.130744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.524 [2024-12-16 02:57:32.130764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.524 [2024-12-16 02:57:32.135053] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.524 [2024-12-16 02:57:32.135265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.524 [2024-12-16 02:57:32.135285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.524 [2024-12-16 02:57:32.139062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.524 [2024-12-16 02:57:32.139273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.524 [2024-12-16 02:57:32.139297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.524 [2024-12-16 02:57:32.142909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.524 [2024-12-16 02:57:32.143104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.524 [2024-12-16 02:57:32.143124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.524 [2024-12-16 02:57:32.146707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.524 [2024-12-16 02:57:32.146902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.524 [2024-12-16 02:57:32.146922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.524 [2024-12-16 02:57:32.150720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.524 [2024-12-16 02:57:32.150924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.524 [2024-12-16 02:57:32.150943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.524 [2024-12-16 02:57:32.154713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.524 [2024-12-16 02:57:32.154911] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.524 [2024-12-16 02:57:32.154931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.524 [2024-12-16 02:57:32.158764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.524 [2024-12-16 02:57:32.158967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.524 [2024-12-16 02:57:32.158986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.524 [2024-12-16 02:57:32.162748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.524 [2024-12-16 02:57:32.162950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.524 [2024-12-16 02:57:32.162969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.524 [2024-12-16 02:57:32.166409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.524 [2024-12-16 02:57:32.166596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.524 [2024-12-16 02:57:32.166616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.524 [2024-12-16 02:57:32.170013] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.524 [2024-12-16 02:57:32.170208] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.524 [2024-12-16 02:57:32.170228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.524 [2024-12-16 02:57:32.173620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.524 [2024-12-16 02:57:32.173814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.524 [2024-12-16 02:57:32.173838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.524 [2024-12-16 02:57:32.177300] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.524 [2024-12-16 02:57:32.177513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.524 [2024-12-16 02:57:32.177534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.785 [2024-12-16 02:57:32.181047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.785 [2024-12-16 02:57:32.181245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.785 [2024-12-16 02:57:32.181265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.785 [2024-12-16 02:57:32.184813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with 
pdu=0x200016eff3c8 00:36:01.785 [2024-12-16 02:57:32.185025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.785 [2024-12-16 02:57:32.185045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.785 [2024-12-16 02:57:32.188662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.785 [2024-12-16 02:57:32.188861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.785 [2024-12-16 02:57:32.188881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.785 [2024-12-16 02:57:32.192865] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.785 [2024-12-16 02:57:32.193047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.785 [2024-12-16 02:57:32.193066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.785 [2024-12-16 02:57:32.196755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.785 [2024-12-16 02:57:32.196960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.785 [2024-12-16 02:57:32.196980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.785 [2024-12-16 02:57:32.200745] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.785 [2024-12-16 02:57:32.200956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.785 [2024-12-16 02:57:32.200976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.785 [2024-12-16 02:57:32.204663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.785 [2024-12-16 02:57:32.204845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.785 [2024-12-16 02:57:32.204871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.785 [2024-12-16 02:57:32.208660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.785 [2024-12-16 02:57:32.208877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.785 [2024-12-16 02:57:32.208898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.785 [2024-12-16 02:57:32.212831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.785 [2024-12-16 02:57:32.213015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.785 [2024-12-16 02:57:32.213036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.785 [2024-12-16 02:57:32.216729] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.785 [2024-12-16 02:57:32.216934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.785 [2024-12-16 02:57:32.216954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.785 [2024-12-16 02:57:32.220664] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.785 [2024-12-16 02:57:32.220856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.785 [2024-12-16 02:57:32.220876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.786 [2024-12-16 02:57:32.224592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.786 [2024-12-16 02:57:32.224770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.786 [2024-12-16 02:57:32.224790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.786 [2024-12-16 02:57:32.228520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.786 [2024-12-16 02:57:32.228708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.786 [2024-12-16 02:57:32.228727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:36:01.786 [2024-12-16 02:57:32.232373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.786 [2024-12-16 02:57:32.232564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.786 [2024-12-16 02:57:32.232584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.786 [2024-12-16 02:57:32.236268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.786 [2024-12-16 02:57:32.236445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.786 [2024-12-16 02:57:32.236465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.786 [2024-12-16 02:57:32.240185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.786 [2024-12-16 02:57:32.240374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.786 [2024-12-16 02:57:32.240394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.786 [2024-12-16 02:57:32.244014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.786 [2024-12-16 02:57:32.244201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.786 [2024-12-16 02:57:32.244221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.786 [2024-12-16 02:57:32.247608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.786 [2024-12-16 02:57:32.247802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.786 [2024-12-16 02:57:32.247822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.786 [2024-12-16 02:57:32.251229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.786 [2024-12-16 02:57:32.251419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.786 [2024-12-16 02:57:32.251438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.786 [2024-12-16 02:57:32.254845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.786 [2024-12-16 02:57:32.255037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.786 [2024-12-16 02:57:32.255056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.786 [2024-12-16 02:57:32.258410] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.786 [2024-12-16 02:57:32.258598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.786 [2024-12-16 02:57:32.258619] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.786 [2024-12-16 02:57:32.262023] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.786 [2024-12-16 02:57:32.262207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.786 [2024-12-16 02:57:32.262227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.786 [2024-12-16 02:57:32.265944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.786 [2024-12-16 02:57:32.266103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.786 [2024-12-16 02:57:32.266123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.786 [2024-12-16 02:57:32.270465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.786 [2024-12-16 02:57:32.270570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.786 [2024-12-16 02:57:32.270590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.786 [2024-12-16 02:57:32.274995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.786 [2024-12-16 02:57:32.275197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.786 [2024-12-16 02:57:32.275219] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.786 [2024-12-16 02:57:32.278927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.786 [2024-12-16 02:57:32.279124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.786 [2024-12-16 02:57:32.279144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.786 [2024-12-16 02:57:32.282829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.786 [2024-12-16 02:57:32.283043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.786 [2024-12-16 02:57:32.283062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.786 [2024-12-16 02:57:32.286754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.786 [2024-12-16 02:57:32.286953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.786 [2024-12-16 02:57:32.286973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.786 [2024-12-16 02:57:32.290704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.786 [2024-12-16 02:57:32.290905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:01.786 [2024-12-16 02:57:32.290924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.786 [2024-12-16 02:57:32.294566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.786 [2024-12-16 02:57:32.294754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.786 [2024-12-16 02:57:32.294774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.786 [2024-12-16 02:57:32.298472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.786 [2024-12-16 02:57:32.298662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.786 [2024-12-16 02:57:32.298682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.786 [2024-12-16 02:57:32.302378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.786 [2024-12-16 02:57:32.302557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.786 [2024-12-16 02:57:32.302576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.786 [2024-12-16 02:57:32.306276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.786 [2024-12-16 02:57:32.306446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19616 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.786 [2024-12-16 02:57:32.306466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.786 [2024-12-16 02:57:32.310230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.786 [2024-12-16 02:57:32.310421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.786 [2024-12-16 02:57:32.310441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.786 [2024-12-16 02:57:32.314114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.786 [2024-12-16 02:57:32.314283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.786 [2024-12-16 02:57:32.314303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.786 [2024-12-16 02:57:32.318052] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.786 [2024-12-16 02:57:32.318507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.786 [2024-12-16 02:57:32.318527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.786 [2024-12-16 02:57:32.322244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.786 [2024-12-16 02:57:32.322424] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.786 [2024-12-16 02:57:32.322445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.786 [2024-12-16 02:57:32.326137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.786 [2024-12-16 02:57:32.326310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.786 [2024-12-16 02:57:32.326331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.786 [2024-12-16 02:57:32.330022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.786 [2024-12-16 02:57:32.330215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.787 [2024-12-16 02:57:32.330234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.787 [2024-12-16 02:57:32.334178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.787 [2024-12-16 02:57:32.334419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.787 [2024-12-16 02:57:32.334439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.787 [2024-12-16 02:57:32.338967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.787 [2024-12-16 02:57:32.339236] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.787 [2024-12-16 02:57:32.339256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.787 [2024-12-16 02:57:32.344541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.787 [2024-12-16 02:57:32.344736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.787 [2024-12-16 02:57:32.344756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.787 [2024-12-16 02:57:32.349637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.787 [2024-12-16 02:57:32.349790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.787 [2024-12-16 02:57:32.349809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.787 [2024-12-16 02:57:32.355740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.787 [2024-12-16 02:57:32.356035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.787 [2024-12-16 02:57:32.356055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.787 [2024-12-16 02:57:32.361556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with 
pdu=0x200016eff3c8 00:36:01.787 [2024-12-16 02:57:32.361756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.787 [2024-12-16 02:57:32.361776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.787 [2024-12-16 02:57:32.367989] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.787 [2024-12-16 02:57:32.368184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.787 [2024-12-16 02:57:32.368204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.787 [2024-12-16 02:57:32.374587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.787 [2024-12-16 02:57:32.374877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.787 [2024-12-16 02:57:32.374899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.787 [2024-12-16 02:57:32.380935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.787 [2024-12-16 02:57:32.381118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.787 [2024-12-16 02:57:32.381138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.787 [2024-12-16 02:57:32.387977] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.787 [2024-12-16 02:57:32.388283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.787 [2024-12-16 02:57:32.388303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.787 [2024-12-16 02:57:32.394425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.787 [2024-12-16 02:57:32.394668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.787 [2024-12-16 02:57:32.394688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.787 [2024-12-16 02:57:32.401003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.787 [2024-12-16 02:57:32.401169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.787 [2024-12-16 02:57:32.401193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.787 [2024-12-16 02:57:32.407188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.787 [2024-12-16 02:57:32.407410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.787 [2024-12-16 02:57:32.407431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:01.787 [2024-12-16 
02:57:32.414063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.787 [2024-12-16 02:57:32.414307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.787 [2024-12-16 02:57:32.414327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:01.787 [2024-12-16 02:57:32.420733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.787 [2024-12-16 02:57:32.420877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.787 [2024-12-16 02:57:32.420897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:01.787 [2024-12-16 02:57:32.426986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.787 [2024-12-16 02:57:32.427251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.787 [2024-12-16 02:57:32.427272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:01.787 [2024-12-16 02:57:32.433376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.787 [2024-12-16 02:57:32.433588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.787 [2024-12-16 02:57:32.433608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:36:01.787 [2024-12-16 02:57:32.439479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:01.787 [2024-12-16 02:57:32.439771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.787 [2024-12-16 02:57:32.439791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.048 [2024-12-16 02:57:32.445381] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.048 [2024-12-16 02:57:32.445645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.048 [2024-12-16 02:57:32.445666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.048 [2024-12-16 02:57:32.451898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.048 [2024-12-16 02:57:32.452002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.048 [2024-12-16 02:57:32.452036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.048 [2024-12-16 02:57:32.457622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.048 [2024-12-16 02:57:32.457788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.048 [2024-12-16 02:57:32.457808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.048 [2024-12-16 02:57:32.463126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.048 [2024-12-16 02:57:32.463355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.048 [2024-12-16 02:57:32.463376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.048 [2024-12-16 02:57:32.468654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.048 [2024-12-16 02:57:32.468832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.048 [2024-12-16 02:57:32.468859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.048 [2024-12-16 02:57:32.473788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.048 [2024-12-16 02:57:32.473963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.048 [2024-12-16 02:57:32.473984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.048 [2024-12-16 02:57:32.478388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.048 [2024-12-16 02:57:32.478560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.048 [2024-12-16 02:57:32.478579] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.048 [2024-12-16 02:57:32.482521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.048 [2024-12-16 02:57:32.482716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.048 [2024-12-16 02:57:32.482737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.048 [2024-12-16 02:57:32.486474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.048 [2024-12-16 02:57:32.486662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.048 [2024-12-16 02:57:32.486682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.048 [2024-12-16 02:57:32.490319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.048 [2024-12-16 02:57:32.490504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.048 [2024-12-16 02:57:32.490523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.048 [2024-12-16 02:57:32.494187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.048 [2024-12-16 02:57:32.494372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.048 
[2024-12-16 02:57:32.494392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.048 [2024-12-16 02:57:32.498074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.048 [2024-12-16 02:57:32.498262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.048 [2024-12-16 02:57:32.498282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.048 [2024-12-16 02:57:32.501967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.048 [2024-12-16 02:57:32.502156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.048 [2024-12-16 02:57:32.502175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.048 [2024-12-16 02:57:32.505943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.048 [2024-12-16 02:57:32.506132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.048 [2024-12-16 02:57:32.506153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.048 [2024-12-16 02:57:32.510610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.048 [2024-12-16 02:57:32.510834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.048 [2024-12-16 02:57:32.510859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.048 [2024-12-16 02:57:32.516438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.048 [2024-12-16 02:57:32.516591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.048 [2024-12-16 02:57:32.516612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.048 [2024-12-16 02:57:32.522375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.048 [2024-12-16 02:57:32.522674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.048 [2024-12-16 02:57:32.522695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.048 [2024-12-16 02:57:32.528281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.048 [2024-12-16 02:57:32.528476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.048 [2024-12-16 02:57:32.528496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.048 [2024-12-16 02:57:32.534742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.049 [2024-12-16 02:57:32.534945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.049 [2024-12-16 02:57:32.534965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.049 [2024-12-16 02:57:32.541765] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.049 [2024-12-16 02:57:32.541986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.049 [2024-12-16 02:57:32.542011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.049 [2024-12-16 02:57:32.548170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.049 [2024-12-16 02:57:32.548441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.049 [2024-12-16 02:57:32.548461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.049 [2024-12-16 02:57:32.554905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.049 [2024-12-16 02:57:32.555168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.049 [2024-12-16 02:57:32.555189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.049 [2024-12-16 02:57:32.561512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.049 [2024-12-16 02:57:32.561812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.049 [2024-12-16 02:57:32.561832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.049 [2024-12-16 02:57:32.568339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.049 [2024-12-16 02:57:32.568444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.049 [2024-12-16 02:57:32.568463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.049 [2024-12-16 02:57:32.575334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.049 [2024-12-16 02:57:32.575597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.049 [2024-12-16 02:57:32.575619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.049 [2024-12-16 02:57:32.581792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.049 [2024-12-16 02:57:32.581975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.049 [2024-12-16 02:57:32.581996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.049 [2024-12-16 02:57:32.589099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.049 
[2024-12-16 02:57:32.589307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.049 [2024-12-16 02:57:32.589327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.049 [2024-12-16 02:57:32.595097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.049 [2024-12-16 02:57:32.595374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.049 [2024-12-16 02:57:32.595394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.049 [2024-12-16 02:57:32.600798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.049 [2024-12-16 02:57:32.601058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.049 [2024-12-16 02:57:32.601078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.049 [2024-12-16 02:57:32.606334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.049 [2024-12-16 02:57:32.606594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.049 [2024-12-16 02:57:32.606614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.049 [2024-12-16 02:57:32.612427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.049 [2024-12-16 02:57:32.612723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.049 [2024-12-16 02:57:32.612742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.049 [2024-12-16 02:57:32.618334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.049 [2024-12-16 02:57:32.618547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.049 [2024-12-16 02:57:32.618568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.049 [2024-12-16 02:57:32.624408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.049 [2024-12-16 02:57:32.624592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.049 [2024-12-16 02:57:32.624612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.049 [2024-12-16 02:57:32.630581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.049 [2024-12-16 02:57:32.630713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.049 [2024-12-16 02:57:32.630732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.049 [2024-12-16 02:57:32.637004] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.049 [2024-12-16 02:57:32.637153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.049 [2024-12-16 02:57:32.637174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.049 [2024-12-16 02:57:32.643983] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.049 [2024-12-16 02:57:32.644173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.049 [2024-12-16 02:57:32.644193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.049 [2024-12-16 02:57:32.649691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.049 [2024-12-16 02:57:32.649868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.049 [2024-12-16 02:57:32.649888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.049 [2024-12-16 02:57:32.655111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.049 [2024-12-16 02:57:32.655277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.049 [2024-12-16 02:57:32.655297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:36:02.049 [2024-12-16 02:57:32.659974] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.049 [2024-12-16 02:57:32.660131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.049 [2024-12-16 02:57:32.660151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.049 [2024-12-16 02:57:32.664081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.049 [2024-12-16 02:57:32.664258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.049 [2024-12-16 02:57:32.664278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.049 [2024-12-16 02:57:32.668065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.049 [2024-12-16 02:57:32.668235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.049 [2024-12-16 02:57:32.668255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.049 [2024-12-16 02:57:32.672284] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.049 [2024-12-16 02:57:32.672474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.049 [2024-12-16 02:57:32.672494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.049 [2024-12-16 02:57:32.676166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.049 [2024-12-16 02:57:32.676359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.049 [2024-12-16 02:57:32.676379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:02.049 [2024-12-16 02:57:32.680122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.049 [2024-12-16 02:57:32.680316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.049 [2024-12-16 02:57:32.680336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:02.049 [2024-12-16 02:57:32.684190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.049 [2024-12-16 02:57:32.684376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.049 [2024-12-16 02:57:32.684396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:02.049 5946.50 IOPS, 743.31 MiB/s [2024-12-16T01:57:32.708Z] [2024-12-16 02:57:32.688979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7de2a0) with pdu=0x200016eff3c8 00:36:02.049 [2024-12-16 02:57:32.689101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.050 [2024-12-16 
02:57:32.689126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:02.050 00:36:02.050 Latency(us) 00:36:02.050 [2024-12-16T01:57:32.709Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:02.050 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:02.050 nvme0n1 : 2.00 5946.54 743.32 0.00 0.00 2686.49 1451.15 7957.94 00:36:02.050 [2024-12-16T01:57:32.709Z] =================================================================================================================== 00:36:02.050 [2024-12-16T01:57:32.709Z] Total : 5946.54 743.32 0.00 0.00 2686.49 1451.15 7957.94 00:36:02.050 { 00:36:02.050 "results": [ 00:36:02.050 { 00:36:02.050 "job": "nvme0n1", 00:36:02.050 "core_mask": "0x2", 00:36:02.050 "workload": "randwrite", 00:36:02.050 "status": "finished", 00:36:02.050 "queue_depth": 16, 00:36:02.050 "io_size": 131072, 00:36:02.050 "runtime": 2.003349, 00:36:02.050 "iops": 5946.54251455937, 00:36:02.050 "mibps": 743.3178143199212, 00:36:02.050 "io_failed": 0, 00:36:02.050 "io_timeout": 0, 00:36:02.050 "avg_latency_us": 2686.4935301571313, 00:36:02.050 "min_latency_us": 1451.1542857142856, 00:36:02.050 "max_latency_us": 7957.942857142857 00:36:02.050 } 00:36:02.050 ], 00:36:02.050 "core_count": 1 00:36:02.050 } 00:36:02.309 02:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:02.309 02:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:02.309 02:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:02.309 | .driver_specific 00:36:02.309 | .nvme_error 00:36:02.309 | .status_code 00:36:02.309 | .command_transient_transport_error' 00:36:02.309 02:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:02.309 02:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 385 > 0 )) 00:36:02.309 02:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1199693 00:36:02.309 02:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1199693 ']' 00:36:02.309 02:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1199693 00:36:02.309 02:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:02.309 02:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:02.309 02:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1199693 00:36:02.309 02:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:02.309 02:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:02.309 02:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1199693' 00:36:02.309 killing process with pid 1199693 00:36:02.309 02:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1199693 00:36:02.309 Received shutdown signal, test time was about 2.000000 seconds 00:36:02.309 00:36:02.309 Latency(us) 00:36:02.309 [2024-12-16T01:57:32.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:02.309 [2024-12-16T01:57:32.968Z] =================================================================================================================== 00:36:02.309 [2024-12-16T01:57:32.968Z] Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:02.309 02:57:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1199693 00:36:02.568 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1198076 00:36:02.568 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1198076 ']' 00:36:02.568 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1198076 00:36:02.568 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:02.568 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:02.568 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1198076 00:36:02.568 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:02.568 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:02.568 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1198076' 00:36:02.568 killing process with pid 1198076 00:36:02.568 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1198076 00:36:02.568 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1198076 00:36:02.828 00:36:02.828 real 0m13.898s 00:36:02.828 user 0m26.700s 00:36:02.828 sys 0m4.424s 00:36:02.828 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:02.828 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:02.828 
************************************ 00:36:02.828 END TEST nvmf_digest_error 00:36:02.828 ************************************ 00:36:02.828 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:36:02.828 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:36:02.828 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:02.828 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:36:02.828 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:02.828 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:36:02.828 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:02.828 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:02.828 rmmod nvme_tcp 00:36:02.828 rmmod nvme_fabrics 00:36:02.828 rmmod nvme_keyring 00:36:02.828 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:02.828 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:36:02.828 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:36:02.828 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1198076 ']' 00:36:02.828 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1198076 00:36:02.828 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1198076 ']' 00:36:02.828 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1198076 00:36:02.828 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1198076) - No such process 00:36:02.828 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1198076 is not found' 00:36:02.828 Process with pid 1198076 is 
not found 00:36:02.828 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:02.828 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:02.828 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:02.828 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:36:02.828 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:36:02.828 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:02.828 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:36:02.828 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:02.828 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:02.828 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:02.828 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:02.828 02:57:33 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:05.469 00:36:05.469 real 0m36.478s 00:36:05.469 user 0m55.471s 00:36:05.469 sys 0m13.579s 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:05.469 ************************************ 00:36:05.469 END TEST nvmf_digest 00:36:05.469 ************************************ 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:36:05.469 02:57:35 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:05.469 ************************************ 00:36:05.469 START TEST nvmf_bdevperf 00:36:05.469 ************************************ 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:36:05.469 * Looking for test storage... 00:36:05.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
scripts/common.sh@337 -- # IFS=.-: 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:05.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.469 --rc genhtml_branch_coverage=1 00:36:05.469 --rc genhtml_function_coverage=1 00:36:05.469 --rc genhtml_legend=1 00:36:05.469 --rc geninfo_all_blocks=1 00:36:05.469 --rc geninfo_unexecuted_blocks=1 00:36:05.469 00:36:05.469 ' 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:05.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.469 --rc genhtml_branch_coverage=1 00:36:05.469 --rc genhtml_function_coverage=1 00:36:05.469 --rc genhtml_legend=1 00:36:05.469 --rc geninfo_all_blocks=1 00:36:05.469 --rc geninfo_unexecuted_blocks=1 00:36:05.469 00:36:05.469 ' 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:05.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.469 --rc genhtml_branch_coverage=1 00:36:05.469 --rc genhtml_function_coverage=1 00:36:05.469 --rc genhtml_legend=1 00:36:05.469 --rc geninfo_all_blocks=1 00:36:05.469 --rc geninfo_unexecuted_blocks=1 00:36:05.469 00:36:05.469 ' 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:05.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:05.469 --rc genhtml_branch_coverage=1 00:36:05.469 --rc genhtml_function_coverage=1 00:36:05.469 --rc genhtml_legend=1 00:36:05.469 --rc geninfo_all_blocks=1 
00:36:05.469 --rc geninfo_unexecuted_blocks=1 00:36:05.469 00:36:05.469 ' 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:05.469 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:05.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- host/bdevperf.sh@24 -- # nvmftestinit 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:36:05.470 02:57:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:10.742 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:10.742 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:10.742 Found net devices under 0000:af:00.0: cvl_0_0 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:10.742 Found net devices under 0000:af:00.1: cvl_0_1 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:10.742 02:57:41 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:10.742 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:11.001 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:11.002 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:11.002 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:11.002 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:11.002 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:11.002 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
00:36:11.002 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:11.002 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:11.002 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:11.002 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:36:11.002 00:36:11.002 --- 10.0.0.2 ping statistics --- 00:36:11.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:11.002 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:36:11.002 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:11.002 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:11.002 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:36:11.002 00:36:11.002 --- 10.0.0.1 ping statistics --- 00:36:11.002 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:11.002 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:36:11.002 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:11.002 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:36:11.002 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:11.002 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:11.002 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:11.002 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:11.002 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:11.002 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:11.002 02:57:41 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:11.002 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:36:11.002 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:11.002 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:11.002 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:11.002 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:11.002 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1203656 00:36:11.002 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1203656 00:36:11.002 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:11.002 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1203656 ']' 00:36:11.002 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:11.002 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:11.002 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:11.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:11.002 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:11.002 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:11.261 [2024-12-16 02:57:41.690921] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:36:11.261 [2024-12-16 02:57:41.690965] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:11.261 [2024-12-16 02:57:41.755285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:11.261 [2024-12-16 02:57:41.778101] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:11.261 [2024-12-16 02:57:41.778139] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:11.261 [2024-12-16 02:57:41.778146] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:11.261 [2024-12-16 02:57:41.778152] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:11.261 [2024-12-16 02:57:41.778157] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:11.261 [2024-12-16 02:57:41.779419] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:36:11.261 [2024-12-16 02:57:41.779523] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:11.261 [2024-12-16 02:57:41.779525] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:36:11.261 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:11.261 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:36:11.261 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:11.261 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:11.261 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:11.261 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:11.261 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:11.261 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.261 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:11.261 [2024-12-16 02:57:41.906055] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:11.261 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.261 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:11.261 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.261 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:11.520 Malloc0 00:36:11.520 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
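The `trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT` line above is the harness's cleanup hook: teardown runs even if the test is interrupted or exits early. A minimal reproduction of that pattern (with a stand-in `cleanup` instead of `nvmftestfini`):

```shell
# Run a child bash with an EXIT/SIGINT/SIGTERM trap; the trap fires on
# normal exit too, so "cleanup" is printed after the work finishes.
out=$(bash -c 'cleanup() { echo cleanup; }
trap cleanup SIGINT SIGTERM EXIT
echo work')
echo "$out"
```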
-- # [[ 0 == 0 ]] 00:36:11.520 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:11.520 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.520 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:11.520 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.520 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:11.520 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.520 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:11.520 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.520 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:11.520 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.520 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:11.520 [2024-12-16 02:57:41.975069] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:11.520 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.520 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:36:11.520 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:36:11.520 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:36:11.520 
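The `rpc_cmd` calls above provision the target in five steps: create the TCP transport, back it with a malloc bdev, create the subsystem, attach the namespace, and expose a listener on 10.0.0.2:4420. A condensed dry-run replay, using the same RPC names and arguments as the log (`rpc()` only records the call here, since no running target socket is assumed):

```shell
# Record the provisioning sequence; replace rpc() with scripts/rpc.py
# against a live nvmf_tgt to run it for real.
RPCS=()
rpc() { RPCS+=("$*"); echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```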
02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:36:11.520 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:11.520 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:11.520 { 00:36:11.520 "params": { 00:36:11.520 "name": "Nvme$subsystem", 00:36:11.520 "trtype": "$TEST_TRANSPORT", 00:36:11.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:11.520 "adrfam": "ipv4", 00:36:11.520 "trsvcid": "$NVMF_PORT", 00:36:11.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:11.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:11.520 "hdgst": ${hdgst:-false}, 00:36:11.520 "ddgst": ${ddgst:-false} 00:36:11.520 }, 00:36:11.520 "method": "bdev_nvme_attach_controller" 00:36:11.520 } 00:36:11.520 EOF 00:36:11.520 )") 00:36:11.520 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:36:11.520 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:36:11.520 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:36:11.520 02:57:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:11.520 "params": { 00:36:11.520 "name": "Nvme1", 00:36:11.520 "trtype": "tcp", 00:36:11.520 "traddr": "10.0.0.2", 00:36:11.520 "adrfam": "ipv4", 00:36:11.520 "trsvcid": "4420", 00:36:11.520 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:11.520 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:11.520 "hdgst": false, 00:36:11.520 "ddgst": false 00:36:11.520 }, 00:36:11.520 "method": "bdev_nvme_attach_controller" 00:36:11.520 }' 00:36:11.520 [2024-12-16 02:57:42.026191] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
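`gen_nvmf_target_json` above builds the bdevperf JSON by expanding environment values into a heredoc fragment per subsystem, then piping through `jq`. A minimal sketch of the heredoc step alone, with `TEST_TRANSPORT`, `NVMF_FIRST_TARGET_IP`, and `NVMF_PORT` set to the values the log resolved them to; `${hdgst:-false}` defaults to `false` when unset, exactly as in the printed config:

```shell
# Expand env vars into one bdev_nvme_attach_controller JSON fragment.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1

config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```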
00:36:11.520 [2024-12-16 02:57:42.026238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1203757 ] 00:36:11.520 [2024-12-16 02:57:42.102487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:11.520 [2024-12-16 02:57:42.124955] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:36:11.779 Running I/O for 1 seconds... 00:36:13.153 11390.00 IOPS, 44.49 MiB/s 00:36:13.153 Latency(us) 00:36:13.153 [2024-12-16T01:57:43.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:13.153 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:13.153 Verification LBA range: start 0x0 length 0x4000 00:36:13.153 Nvme1n1 : 1.01 11459.82 44.76 0.00 0.00 11114.19 2215.74 10860.25 00:36:13.153 [2024-12-16T01:57:43.812Z] =================================================================================================================== 00:36:13.153 [2024-12-16T01:57:43.812Z] Total : 11459.82 44.76 0.00 0.00 11114.19 2215.74 10860.25 00:36:13.153 02:57:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1204066 00:36:13.153 02:57:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:36:13.153 02:57:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:36:13.153 02:57:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:36:13.153 02:57:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:36:13.153 02:57:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:36:13.153 02:57:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:36:13.153 02:57:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:13.153 { 00:36:13.153 "params": { 00:36:13.153 "name": "Nvme$subsystem", 00:36:13.153 "trtype": "$TEST_TRANSPORT", 00:36:13.153 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:13.153 "adrfam": "ipv4", 00:36:13.153 "trsvcid": "$NVMF_PORT", 00:36:13.153 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:13.153 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:13.153 "hdgst": ${hdgst:-false}, 00:36:13.153 "ddgst": ${ddgst:-false} 00:36:13.153 }, 00:36:13.153 "method": "bdev_nvme_attach_controller" 00:36:13.153 } 00:36:13.153 EOF 00:36:13.153 )") 00:36:13.153 02:57:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:36:13.153 02:57:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:36:13.153 02:57:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:36:13.153 02:57:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:13.153 "params": { 00:36:13.153 "name": "Nvme1", 00:36:13.153 "trtype": "tcp", 00:36:13.153 "traddr": "10.0.0.2", 00:36:13.153 "adrfam": "ipv4", 00:36:13.153 "trsvcid": "4420", 00:36:13.153 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:13.153 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:13.153 "hdgst": false, 00:36:13.153 "ddgst": false 00:36:13.153 }, 00:36:13.153 "method": "bdev_nvme_attach_controller" 00:36:13.153 }' 00:36:13.153 [2024-12-16 02:57:43.609611] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
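The MiB/s column in the 1-second bdevperf summary above is just IOPS times the 4096-byte IO size (the `-o 4096` flag) divided by 2^20; a quick arithmetic check against the reported Nvme1n1 row:

```shell
# Verify: 11459.82 IOPS * 4096 B per IO = 44.76 MiB/s, matching the table.
mibs=$(awk 'BEGIN { printf "%.2f", 11459.82 * 4096 / 1048576 }')
echo "$mibs MiB/s"
```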
00:36:13.153 [2024-12-16 02:57:43.609663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1204066 ] 00:36:13.153 [2024-12-16 02:57:43.687351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:13.153 [2024-12-16 02:57:43.707417] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:36:13.412 Running I/O for 15 seconds... 00:36:15.721 11556.00 IOPS, 45.14 MiB/s [2024-12-16T01:57:46.641Z] 11531.00 IOPS, 45.04 MiB/s [2024-12-16T01:57:46.641Z] 02:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1203656 00:36:15.982 02:57:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:36:15.982 [2024-12-16 02:57:46.584522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:103808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.982 [2024-12-16 02:57:46.584561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.982 [2024-12-16 02:57:46.584578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:103816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.584588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.584603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:103824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.584611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.584619] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:103832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.584627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.584636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.584643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.584652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:103848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.584659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.584668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:103856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.584676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.584685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:103864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.584693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.584704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.584712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.584721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:103880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.584729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.584741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:103888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.584751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.584760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:103896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.584767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.584775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:103904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.584782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.584792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:103912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.584806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.584816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:103920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 
02:57:46.584827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.584836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:103928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.584845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.584861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:103936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.584868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.584876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:103944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.584885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.584894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:103952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.584903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.584914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.584922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.584930] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:103968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.584939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.584948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:103976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.584959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.584970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:103984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.584980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.584990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:103992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.584999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.585007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:104000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.585017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.585025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:104008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.585032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.585041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:104016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.585047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.585059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:104024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.585066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.585075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:104032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.585081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.585090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.585099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.585113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:104048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.585124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.585142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:104056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 
02:57:46.585153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.585163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:104064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.585175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.585190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:104072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.585203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.585216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:104080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.585223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.585235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:104088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.585244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.585255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:104096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.585266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.585278] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.585289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.585300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:104112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.983 [2024-12-16 02:57:46.585312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.983 [2024-12-16 02:57:46.585326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:104120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:104128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:104136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:104144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:104152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:104160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:104168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:104176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:104184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:104192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585481] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:104200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:104208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:104216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:104224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:104232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:63 nsid:1 lba:104240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:104248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:104256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:104264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:104272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:103504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.984 [2024-12-16 02:57:46.585645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:36:15.984 [2024-12-16 02:57:46.585653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:103512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.984 [2024-12-16 02:57:46.585659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.984 [2024-12-16 02:57:46.585673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:103528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.984 [2024-12-16 02:57:46.585688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:103536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.984 [2024-12-16 02:57:46.585703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:104280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585738] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:104296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:104312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:104320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:104328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 
lba:104336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:104344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:104352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:104360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:104368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:104376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 
[2024-12-16 02:57:46.585916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:104384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.984 [2024-12-16 02:57:46.585924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.984 [2024-12-16 02:57:46.585933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:104392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.985 [2024-12-16 02:57:46.585939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.585947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:104400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.985 [2024-12-16 02:57:46.585953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.585961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:104408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.985 [2024-12-16 02:57:46.585968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.585977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:104416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.985 [2024-12-16 02:57:46.585984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.585991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:104424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.985 [2024-12-16 02:57:46.585998] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:104432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.985 [2024-12-16 02:57:46.586012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:104440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.985 [2024-12-16 02:57:46.586027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.985 [2024-12-16 02:57:46.586041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:104456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.985 [2024-12-16 02:57:46.586063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:104464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.985 [2024-12-16 02:57:46.586078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 
lba:104472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.985 [2024-12-16 02:57:46.586094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:104480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.985 [2024-12-16 02:57:46.586108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:104488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.985 [2024-12-16 02:57:46.586124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:104496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.985 [2024-12-16 02:57:46.586140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:104504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.985 [2024-12-16 02:57:46.586154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:104512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.985 [2024-12-16 02:57:46.586169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 
[2024-12-16 02:57:46.586177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:103544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.985 [2024-12-16 02:57:46.586183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:103552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.985 [2024-12-16 02:57:46.586197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:103560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.985 [2024-12-16 02:57:46.586212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:103568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.985 [2024-12-16 02:57:46.586227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:103576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.985 [2024-12-16 02:57:46.586242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:103584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.985 [2024-12-16 02:57:46.586257] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:103592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.985 [2024-12-16 02:57:46.586272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:103600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.985 [2024-12-16 02:57:46.586287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:103608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.985 [2024-12-16 02:57:46.586302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:103616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.985 [2024-12-16 02:57:46.586319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:103624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.985 [2024-12-16 02:57:46.586334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 
nsid:1 lba:103632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.985 [2024-12-16 02:57:46.586348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:103640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.985 [2024-12-16 02:57:46.586363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:103648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.985 [2024-12-16 02:57:46.586378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:103656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.985 [2024-12-16 02:57:46.586393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:103664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.985 [2024-12-16 02:57:46.586407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:103672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.985 [2024-12-16 02:57:46.586422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:36:15.985 [2024-12-16 02:57:46.586431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:103680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.985 [2024-12-16 02:57:46.586438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:103688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.985 [2024-12-16 02:57:46.586452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:103696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.985 [2024-12-16 02:57:46.586466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:103704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.985 [2024-12-16 02:57:46.586482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.985 [2024-12-16 02:57:46.586499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:103720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.985 [2024-12-16 02:57:46.586513] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:104520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:15.985 [2024-12-16 02:57:46.586528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.985 [2024-12-16 02:57:46.586536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.986 [2024-12-16 02:57:46.586545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.986 [2024-12-16 02:57:46.586553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:103736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.986 [2024-12-16 02:57:46.586560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.986 [2024-12-16 02:57:46.586568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:103744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.986 [2024-12-16 02:57:46.586574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.986 [2024-12-16 02:57:46.586582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:103752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.986 [2024-12-16 02:57:46.586589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.986 [2024-12-16 02:57:46.586598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 
lba:103760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.986 [2024-12-16 02:57:46.586604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.986 [2024-12-16 02:57:46.586612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:103768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.986 [2024-12-16 02:57:46.586618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.986 [2024-12-16 02:57:46.586626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:103776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.986 [2024-12-16 02:57:46.586632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.986 [2024-12-16 02:57:46.586641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:103784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.986 [2024-12-16 02:57:46.586648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.986 [2024-12-16 02:57:46.586656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:103792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:15.986 [2024-12-16 02:57:46.586664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.986 [2024-12-16 02:57:46.586672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25b0920 is same with the state(6) to be set 00:36:15.986 [2024-12-16 02:57:46.586680] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:36:15.986 [2024-12-16 02:57:46.586686] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:36:15.986 [2024-12-16 02:57:46.586693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103800 len:8 PRP1 0x0 PRP2 0x0 00:36:15.986 [2024-12-16 02:57:46.586700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:15.986 [2024-12-16 02:57:46.589497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:15.986 [2024-12-16 02:57:46.589550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:15.986 [2024-12-16 02:57:46.590158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.986 [2024-12-16 02:57:46.590176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:15.986 [2024-12-16 02:57:46.590184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:15.986 [2024-12-16 02:57:46.590360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:15.986 [2024-12-16 02:57:46.590533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:15.986 [2024-12-16 02:57:46.590541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:15.986 [2024-12-16 02:57:46.590549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:15.986 [2024-12-16 02:57:46.590557] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
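The reset cycle above keeps failing at the same point: `posix_sock_create` reports `connect() failed, errno = 111`, which on Linux is `ECONNREFUSED` (the target at 10.0.0.2:4420 is not accepting connections), so each controller reinitialization attempt ends in `bdev_nvme_reset_ctrlr_complete ... Resetting controller failed`. A minimal Python sketch of the same connect-refused probe — illustrative only, not SPDK code; `try_connect` is a hypothetical helper:

```python
import errno
import socket

# On Linux, errno 111 from connect() is ECONNREFUSED — the error the
# SPDK posix_sock_create log lines above are reporting.
assert errno.ECONNREFUSED == 111  # Linux value; differs on other platforms

def try_connect(addr, port, timeout=0.5):
    """Attempt a single TCP connect, as posix_sock_create does per reset
    cycle; return True on success, False if the peer refuses."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((addr, port))
        return True
    except ConnectionRefusedError:  # errno 111 on Linux
        return False
    finally:
        s.close()
```

Each SPDK reset iteration in the log corresponds to one such failed probe followed by tearing the qpair down (`Bad file descriptor` on the already-closed socket) and scheduling the next retry.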
00:36:15.986 [2024-12-16 02:57:46.602605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:15.986 [2024-12-16 02:57:46.603039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.986 [2024-12-16 02:57:46.603059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:15.986 [2024-12-16 02:57:46.603066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:15.986 [2024-12-16 02:57:46.603235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:15.986 [2024-12-16 02:57:46.603404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:15.986 [2024-12-16 02:57:46.603414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:15.986 [2024-12-16 02:57:46.603421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:15.986 [2024-12-16 02:57:46.603429] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:15.986 [2024-12-16 02:57:46.615359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:15.986 [2024-12-16 02:57:46.615795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:15.986 [2024-12-16 02:57:46.615842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:15.986 [2024-12-16 02:57:46.615894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:15.986 [2024-12-16 02:57:46.616406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:15.986 [2024-12-16 02:57:46.616575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:15.986 [2024-12-16 02:57:46.616583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:15.986 [2024-12-16 02:57:46.616594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:15.986 [2024-12-16 02:57:46.616602] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:15.986 [2024-12-16 02:57:46.628128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:15.986 [2024-12-16 02:57:46.628441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:15.986 [2024-12-16 02:57:46.628458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:15.986 [2024-12-16 02:57:46.628465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:15.986 [2024-12-16 02:57:46.628625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:15.986 [2024-12-16 02:57:46.628784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:15.986 [2024-12-16 02:57:46.628793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:15.986 [2024-12-16 02:57:46.628800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:15.986 [2024-12-16 02:57:46.628806] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.247 [2024-12-16 02:57:46.641240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.247 [2024-12-16 02:57:46.641661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.247 [2024-12-16 02:57:46.641707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.247 [2024-12-16 02:57:46.641731] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.247 [2024-12-16 02:57:46.642334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.247 [2024-12-16 02:57:46.642759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.247 [2024-12-16 02:57:46.642768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.247 [2024-12-16 02:57:46.642775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.247 [2024-12-16 02:57:46.642782] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.247 [2024-12-16 02:57:46.654000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.247 [2024-12-16 02:57:46.654353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.247 [2024-12-16 02:57:46.654370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.247 [2024-12-16 02:57:46.654377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.247 [2024-12-16 02:57:46.654535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.247 [2024-12-16 02:57:46.654695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.247 [2024-12-16 02:57:46.654705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.247 [2024-12-16 02:57:46.654712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.247 [2024-12-16 02:57:46.654719] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.247 [2024-12-16 02:57:46.666744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.247 [2024-12-16 02:57:46.667177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.247 [2024-12-16 02:57:46.667222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.247 [2024-12-16 02:57:46.667245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.247 [2024-12-16 02:57:46.667760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.247 [2024-12-16 02:57:46.667944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.247 [2024-12-16 02:57:46.667954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.247 [2024-12-16 02:57:46.667960] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.247 [2024-12-16 02:57:46.667968] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.247 [2024-12-16 02:57:46.679591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.247 [2024-12-16 02:57:46.679980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.247 [2024-12-16 02:57:46.679998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.247 [2024-12-16 02:57:46.680005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.247 [2024-12-16 02:57:46.680165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.247 [2024-12-16 02:57:46.680324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.247 [2024-12-16 02:57:46.680332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.247 [2024-12-16 02:57:46.680339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.247 [2024-12-16 02:57:46.680346] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.247 [2024-12-16 02:57:46.692456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.247 [2024-12-16 02:57:46.692876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.247 [2024-12-16 02:57:46.692922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.247 [2024-12-16 02:57:46.692946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.247 [2024-12-16 02:57:46.693527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.247 [2024-12-16 02:57:46.693902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.247 [2024-12-16 02:57:46.693912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.247 [2024-12-16 02:57:46.693919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.247 [2024-12-16 02:57:46.693926] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.247 [2024-12-16 02:57:46.705224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.247 [2024-12-16 02:57:46.705499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.247 [2024-12-16 02:57:46.705516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.247 [2024-12-16 02:57:46.705527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.247 [2024-12-16 02:57:46.705687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.247 [2024-12-16 02:57:46.705851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.247 [2024-12-16 02:57:46.705861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.247 [2024-12-16 02:57:46.705868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.247 [2024-12-16 02:57:46.705891] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.247 [2024-12-16 02:57:46.718012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.247 [2024-12-16 02:57:46.718351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.247 [2024-12-16 02:57:46.718397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.247 [2024-12-16 02:57:46.718421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.247 [2024-12-16 02:57:46.718905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.247 [2024-12-16 02:57:46.719077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.248 [2024-12-16 02:57:46.719086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.248 [2024-12-16 02:57:46.719093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.248 [2024-12-16 02:57:46.719099] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.248 [2024-12-16 02:57:46.730859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.248 [2024-12-16 02:57:46.731182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.248 [2024-12-16 02:57:46.731199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.248 [2024-12-16 02:57:46.731206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.248 [2024-12-16 02:57:46.731365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.248 [2024-12-16 02:57:46.731525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.248 [2024-12-16 02:57:46.731534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.248 [2024-12-16 02:57:46.731540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.248 [2024-12-16 02:57:46.731546] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.248 [2024-12-16 02:57:46.743661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.248 [2024-12-16 02:57:46.744007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.248 [2024-12-16 02:57:46.744024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.248 [2024-12-16 02:57:46.744031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.248 [2024-12-16 02:57:46.744189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.248 [2024-12-16 02:57:46.744352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.248 [2024-12-16 02:57:46.744361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.248 [2024-12-16 02:57:46.744367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.248 [2024-12-16 02:57:46.744373] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.248 [2024-12-16 02:57:46.756380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.248 [2024-12-16 02:57:46.756773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.248 [2024-12-16 02:57:46.756790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.248 [2024-12-16 02:57:46.756798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.248 [2024-12-16 02:57:46.756982] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.248 [2024-12-16 02:57:46.757152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.248 [2024-12-16 02:57:46.757161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.248 [2024-12-16 02:57:46.757168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.248 [2024-12-16 02:57:46.757175] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.248 [2024-12-16 02:57:46.769156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.248 [2024-12-16 02:57:46.769566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.248 [2024-12-16 02:57:46.769583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.248 [2024-12-16 02:57:46.769590] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.248 [2024-12-16 02:57:46.769749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.248 [2024-12-16 02:57:46.769931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.248 [2024-12-16 02:57:46.769942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.248 [2024-12-16 02:57:46.769948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.248 [2024-12-16 02:57:46.769954] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.248 [2024-12-16 02:57:46.781979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.248 [2024-12-16 02:57:46.782393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.248 [2024-12-16 02:57:46.782410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.248 [2024-12-16 02:57:46.782417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.248 [2024-12-16 02:57:46.782576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.248 [2024-12-16 02:57:46.782736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.248 [2024-12-16 02:57:46.782745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.248 [2024-12-16 02:57:46.782754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.248 [2024-12-16 02:57:46.782761] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.248 [2024-12-16 02:57:46.794770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.248 [2024-12-16 02:57:46.795189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.248 [2024-12-16 02:57:46.795235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.248 [2024-12-16 02:57:46.795259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.248 [2024-12-16 02:57:46.795747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.248 [2024-12-16 02:57:46.795930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.248 [2024-12-16 02:57:46.795940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.248 [2024-12-16 02:57:46.795948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.248 [2024-12-16 02:57:46.795955] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.248 [2024-12-16 02:57:46.807621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.248 [2024-12-16 02:57:46.808068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.248 [2024-12-16 02:57:46.808114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.248 [2024-12-16 02:57:46.808138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.248 [2024-12-16 02:57:46.808721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.248 [2024-12-16 02:57:46.808972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.248 [2024-12-16 02:57:46.808982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.248 [2024-12-16 02:57:46.808988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.248 [2024-12-16 02:57:46.808995] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.248 [2024-12-16 02:57:46.820688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.248 [2024-12-16 02:57:46.821035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.248 [2024-12-16 02:57:46.821053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.248 [2024-12-16 02:57:46.821061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.248 [2024-12-16 02:57:46.821233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.248 [2024-12-16 02:57:46.821406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.248 [2024-12-16 02:57:46.821416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.248 [2024-12-16 02:57:46.821422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.248 [2024-12-16 02:57:46.821429] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.248 [2024-12-16 02:57:46.833499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.248 [2024-12-16 02:57:46.833923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.248 [2024-12-16 02:57:46.833968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.248 [2024-12-16 02:57:46.833992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.248 [2024-12-16 02:57:46.834574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.248 [2024-12-16 02:57:46.834943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.248 [2024-12-16 02:57:46.834953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.248 [2024-12-16 02:57:46.834959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.248 [2024-12-16 02:57:46.834966] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.248 [2024-12-16 02:57:46.846495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.248 [2024-12-16 02:57:46.846943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.249 [2024-12-16 02:57:46.846989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.249 [2024-12-16 02:57:46.847012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.249 [2024-12-16 02:57:46.847593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.249 [2024-12-16 02:57:46.848188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.249 [2024-12-16 02:57:46.848214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.249 [2024-12-16 02:57:46.848237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.249 [2024-12-16 02:57:46.848244] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.249 [2024-12-16 02:57:46.859454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.249 [2024-12-16 02:57:46.859890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.249 [2024-12-16 02:57:46.859907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.249 [2024-12-16 02:57:46.859915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.249 [2024-12-16 02:57:46.860088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.249 [2024-12-16 02:57:46.860266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.249 [2024-12-16 02:57:46.860274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.249 [2024-12-16 02:57:46.860281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.249 [2024-12-16 02:57:46.860287] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.249 [2024-12-16 02:57:46.872333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.249 [2024-12-16 02:57:46.872670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.249 [2024-12-16 02:57:46.872703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.249 [2024-12-16 02:57:46.872736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.249 [2024-12-16 02:57:46.873281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.249 [2024-12-16 02:57:46.873455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.249 [2024-12-16 02:57:46.873463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.249 [2024-12-16 02:57:46.873469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.249 [2024-12-16 02:57:46.873476] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.249 [2024-12-16 02:57:46.885158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.249 [2024-12-16 02:57:46.885524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.249 [2024-12-16 02:57:46.885540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.249 [2024-12-16 02:57:46.885547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.249 [2024-12-16 02:57:46.885706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.249 [2024-12-16 02:57:46.885870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.249 [2024-12-16 02:57:46.885895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.249 [2024-12-16 02:57:46.885902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.249 [2024-12-16 02:57:46.885908] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.249 [2024-12-16 02:57:46.898142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.249 [2024-12-16 02:57:46.898561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.249 [2024-12-16 02:57:46.898577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.249 [2024-12-16 02:57:46.898584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.249 [2024-12-16 02:57:46.898751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.249 [2024-12-16 02:57:46.898923] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.249 [2024-12-16 02:57:46.898932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.249 [2024-12-16 02:57:46.898938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.249 [2024-12-16 02:57:46.898944] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.509 [2024-12-16 02:57:46.911001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.509 [2024-12-16 02:57:46.911267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-12-16 02:57:46.911284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.509 [2024-12-16 02:57:46.911291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.509 [2024-12-16 02:57:46.911459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.509 [2024-12-16 02:57:46.911633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.509 [2024-12-16 02:57:46.911641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.509 [2024-12-16 02:57:46.911647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.509 [2024-12-16 02:57:46.911653] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.509 [2024-12-16 02:57:46.923751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.509 [2024-12-16 02:57:46.924071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-12-16 02:57:46.924100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.509 [2024-12-16 02:57:46.924107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.509 [2024-12-16 02:57:46.924290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.509 [2024-12-16 02:57:46.924463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.509 [2024-12-16 02:57:46.924471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.509 [2024-12-16 02:57:46.924477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.509 [2024-12-16 02:57:46.924483] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.509 [2024-12-16 02:57:46.936495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.509 [2024-12-16 02:57:46.936891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-12-16 02:57:46.936937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.509 [2024-12-16 02:57:46.936960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.509 [2024-12-16 02:57:46.937419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.509 [2024-12-16 02:57:46.937578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.509 [2024-12-16 02:57:46.937585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.509 [2024-12-16 02:57:46.937591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.509 [2024-12-16 02:57:46.937596] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.509 [2024-12-16 02:57:46.949309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.509 [2024-12-16 02:57:46.949733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.509 [2024-12-16 02:57:46.949778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.509 [2024-12-16 02:57:46.949801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.509 [2024-12-16 02:57:46.950302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.509 [2024-12-16 02:57:46.950471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.510 [2024-12-16 02:57:46.950479] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.510 [2024-12-16 02:57:46.950488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.510 [2024-12-16 02:57:46.950494] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.510 [2024-12-16 02:57:46.962129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.510 [2024-12-16 02:57:46.962518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-12-16 02:57:46.962534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.510 [2024-12-16 02:57:46.962541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.510 [2024-12-16 02:57:46.962698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.510 [2024-12-16 02:57:46.962862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.510 [2024-12-16 02:57:46.962870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.510 [2024-12-16 02:57:46.962893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.510 [2024-12-16 02:57:46.962900] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.510 [2024-12-16 02:57:46.974983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.510 [2024-12-16 02:57:46.975400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.510 [2024-12-16 02:57:46.975415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.510 [2024-12-16 02:57:46.975422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.510 [2024-12-16 02:57:46.975580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.510 [2024-12-16 02:57:46.975738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.510 [2024-12-16 02:57:46.975746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.510 [2024-12-16 02:57:46.975751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.510 [2024-12-16 02:57:46.975757] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.510 [2024-12-16 02:57:46.987766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:16.510 [2024-12-16 02:57:46.988216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.510 [2024-12-16 02:57:46.988262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:16.510 [2024-12-16 02:57:46.988285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:16.510 [2024-12-16 02:57:46.988880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:16.510 [2024-12-16 02:57:46.989291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:16.510 [2024-12-16 02:57:46.989308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:16.510 [2024-12-16 02:57:46.989321] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:16.510 [2024-12-16 02:57:46.989334] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:16.510 [2024-12-16 02:57:47.002882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:16.510 [2024-12-16 02:57:47.003408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.510 [2024-12-16 02:57:47.003465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:16.510 [2024-12-16 02:57:47.003488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:16.510 [2024-12-16 02:57:47.004086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:16.510 [2024-12-16 02:57:47.004594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:16.510 [2024-12-16 02:57:47.004605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:16.510 [2024-12-16 02:57:47.004615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:16.510 [2024-12-16 02:57:47.004623] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:16.510 [2024-12-16 02:57:47.015836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:16.510 [2024-12-16 02:57:47.016200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.510 [2024-12-16 02:57:47.016246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:16.510 [2024-12-16 02:57:47.016268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:16.510 [2024-12-16 02:57:47.016877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:16.510 [2024-12-16 02:57:47.017463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:16.510 [2024-12-16 02:57:47.017494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:16.510 [2024-12-16 02:57:47.017501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:16.510 [2024-12-16 02:57:47.017507] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:16.510 [2024-12-16 02:57:47.028682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:16.510 [2024-12-16 02:57:47.029109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.510 [2024-12-16 02:57:47.029153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:16.510 [2024-12-16 02:57:47.029176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:16.510 [2024-12-16 02:57:47.029706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:16.510 [2024-12-16 02:57:47.029878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:16.510 [2024-12-16 02:57:47.029887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:16.510 [2024-12-16 02:57:47.029893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:16.510 [2024-12-16 02:57:47.029899] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:16.510 9774.00 IOPS, 38.18 MiB/s [2024-12-16T01:57:47.169Z] [2024-12-16 02:57:47.041442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:16.510 [2024-12-16 02:57:47.041782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.510 [2024-12-16 02:57:47.041798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:16.510 [2024-12-16 02:57:47.041807] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:16.510 [2024-12-16 02:57:47.041993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:16.510 [2024-12-16 02:57:47.042161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:16.510 [2024-12-16 02:57:47.042169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:16.510 [2024-12-16 02:57:47.042174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:16.510 [2024-12-16 02:57:47.042180] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:16.510 [2024-12-16 02:57:47.054160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:16.510 [2024-12-16 02:57:47.054575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.510 [2024-12-16 02:57:47.054591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:16.510 [2024-12-16 02:57:47.054597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:16.510 [2024-12-16 02:57:47.054756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:16.510 [2024-12-16 02:57:47.054938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:16.510 [2024-12-16 02:57:47.054947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:16.510 [2024-12-16 02:57:47.054953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:16.510 [2024-12-16 02:57:47.054959] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:16.510 [2024-12-16 02:57:47.066892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:16.510 [2024-12-16 02:57:47.067301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.510 [2024-12-16 02:57:47.067316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:16.510 [2024-12-16 02:57:47.067323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:16.510 [2024-12-16 02:57:47.067481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:16.510 [2024-12-16 02:57:47.067639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:16.510 [2024-12-16 02:57:47.067647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:16.510 [2024-12-16 02:57:47.067652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:16.510 [2024-12-16 02:57:47.067658] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:16.510 [2024-12-16 02:57:47.079636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:16.510 [2024-12-16 02:57:47.080050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.511 [2024-12-16 02:57:47.080068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:16.511 [2024-12-16 02:57:47.080075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:16.511 [2024-12-16 02:57:47.080233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:16.511 [2024-12-16 02:57:47.080395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:16.511 [2024-12-16 02:57:47.080405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:16.511 [2024-12-16 02:57:47.080411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:16.511 [2024-12-16 02:57:47.080417] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:16.511 [2024-12-16 02:57:47.092469] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:16.511 [2024-12-16 02:57:47.092880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.511 [2024-12-16 02:57:47.092913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:16.511 [2024-12-16 02:57:47.092920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:16.511 [2024-12-16 02:57:47.093087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:16.511 [2024-12-16 02:57:47.093256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:16.511 [2024-12-16 02:57:47.093264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:16.511 [2024-12-16 02:57:47.093271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:16.511 [2024-12-16 02:57:47.093277] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:16.511 [2024-12-16 02:57:47.105403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:16.511 [2024-12-16 02:57:47.105801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.511 [2024-12-16 02:57:47.105818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:16.511 [2024-12-16 02:57:47.105825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:16.511 [2024-12-16 02:57:47.106000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:16.511 [2024-12-16 02:57:47.106168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:16.511 [2024-12-16 02:57:47.106177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:16.511 [2024-12-16 02:57:47.106184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:16.511 [2024-12-16 02:57:47.106190] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:16.511 [2024-12-16 02:57:47.118206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:16.511 [2024-12-16 02:57:47.118573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.511 [2024-12-16 02:57:47.118619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:16.511 [2024-12-16 02:57:47.118642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:16.511 [2024-12-16 02:57:47.119074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:16.511 [2024-12-16 02:57:47.119244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:16.511 [2024-12-16 02:57:47.119254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:16.511 [2024-12-16 02:57:47.119264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:16.511 [2024-12-16 02:57:47.119271] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:16.511 [2024-12-16 02:57:47.130960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:16.511 [2024-12-16 02:57:47.131368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.511 [2024-12-16 02:57:47.131385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:16.511 [2024-12-16 02:57:47.131393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:16.511 [2024-12-16 02:57:47.131551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:16.511 [2024-12-16 02:57:47.131710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:16.511 [2024-12-16 02:57:47.131720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:16.511 [2024-12-16 02:57:47.131725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:16.511 [2024-12-16 02:57:47.131731] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:16.511 [2024-12-16 02:57:47.143773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:16.511 [2024-12-16 02:57:47.144190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.511 [2024-12-16 02:57:47.144207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:16.511 [2024-12-16 02:57:47.144214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:16.511 [2024-12-16 02:57:47.144373] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:16.511 [2024-12-16 02:57:47.144532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:16.511 [2024-12-16 02:57:47.144541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:16.511 [2024-12-16 02:57:47.144547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:16.511 [2024-12-16 02:57:47.144553] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:16.511 [2024-12-16 02:57:47.156514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:16.511 [2024-12-16 02:57:47.156876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.511 [2024-12-16 02:57:47.156922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:16.511 [2024-12-16 02:57:47.156946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:16.511 [2024-12-16 02:57:47.157527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:16.511 [2024-12-16 02:57:47.158061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:16.511 [2024-12-16 02:57:47.158070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:16.511 [2024-12-16 02:57:47.158077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:16.511 [2024-12-16 02:57:47.158084] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:16.771 [2024-12-16 02:57:47.169544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:16.771 [2024-12-16 02:57:47.169906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.771 [2024-12-16 02:57:47.169952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:16.771 [2024-12-16 02:57:47.169976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:16.771 [2024-12-16 02:57:47.170558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:16.771 [2024-12-16 02:57:47.170985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:16.771 [2024-12-16 02:57:47.171003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:16.771 [2024-12-16 02:57:47.171017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:16.771 [2024-12-16 02:57:47.171031] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:16.771 [2024-12-16 02:57:47.184610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:16.771 [2024-12-16 02:57:47.185126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.771 [2024-12-16 02:57:47.185148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:16.771 [2024-12-16 02:57:47.185159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:16.771 [2024-12-16 02:57:47.185413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:16.771 [2024-12-16 02:57:47.185669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:16.771 [2024-12-16 02:57:47.185682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:16.771 [2024-12-16 02:57:47.185691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:16.771 [2024-12-16 02:57:47.185701] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:16.771 [2024-12-16 02:57:47.197632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:16.771 [2024-12-16 02:57:47.197989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.771 [2024-12-16 02:57:47.198008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:16.771 [2024-12-16 02:57:47.198016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:16.771 [2024-12-16 02:57:47.198188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:16.771 [2024-12-16 02:57:47.198362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:16.771 [2024-12-16 02:57:47.198372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:16.771 [2024-12-16 02:57:47.198378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:16.771 [2024-12-16 02:57:47.198385] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:16.771 [2024-12-16 02:57:47.210391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:16.771 [2024-12-16 02:57:47.210809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.771 [2024-12-16 02:57:47.210865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:16.772 [2024-12-16 02:57:47.210898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:16.772 [2024-12-16 02:57:47.211482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:16.772 [2024-12-16 02:57:47.211908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:16.772 [2024-12-16 02:57:47.211927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:16.772 [2024-12-16 02:57:47.211941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:16.772 [2024-12-16 02:57:47.211955] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:16.772 [2024-12-16 02:57:47.225336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:16.772 [2024-12-16 02:57:47.225790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.772 [2024-12-16 02:57:47.225813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:16.772 [2024-12-16 02:57:47.225824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:16.772 [2024-12-16 02:57:47.226086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:16.772 [2024-12-16 02:57:47.226342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:16.772 [2024-12-16 02:57:47.226355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:16.772 [2024-12-16 02:57:47.226364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:16.772 [2024-12-16 02:57:47.226374] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:16.772 [2024-12-16 02:57:47.238403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:16.772 [2024-12-16 02:57:47.238844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:16.772 [2024-12-16 02:57:47.238899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:16.772 [2024-12-16 02:57:47.238922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:16.772 [2024-12-16 02:57:47.239504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:16.772 [2024-12-16 02:57:47.240100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:16.772 [2024-12-16 02:57:47.240132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:16.772 [2024-12-16 02:57:47.240139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:16.772 [2024-12-16 02:57:47.240146] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:16.772 [2024-12-16 02:57:47.251131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.772 [2024-12-16 02:57:47.251516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.772 [2024-12-16 02:57:47.251533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.772 [2024-12-16 02:57:47.251541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.772 [2024-12-16 02:57:47.251709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.772 [2024-12-16 02:57:47.251888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.772 [2024-12-16 02:57:47.251899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.772 [2024-12-16 02:57:47.251906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.772 [2024-12-16 02:57:47.251913] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.772 [2024-12-16 02:57:47.263913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.772 [2024-12-16 02:57:47.264216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.772 [2024-12-16 02:57:47.264233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.772 [2024-12-16 02:57:47.264241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.772 [2024-12-16 02:57:47.264399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.772 [2024-12-16 02:57:47.264558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.772 [2024-12-16 02:57:47.264567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.772 [2024-12-16 02:57:47.264573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.772 [2024-12-16 02:57:47.264579] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.772 [2024-12-16 02:57:47.276649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.772 [2024-12-16 02:57:47.277060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.772 [2024-12-16 02:57:47.277077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.772 [2024-12-16 02:57:47.277084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.772 [2024-12-16 02:57:47.277244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.772 [2024-12-16 02:57:47.277403] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.772 [2024-12-16 02:57:47.277412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.772 [2024-12-16 02:57:47.277419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.772 [2024-12-16 02:57:47.277425] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.772 [2024-12-16 02:57:47.289384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.772 [2024-12-16 02:57:47.289794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.772 [2024-12-16 02:57:47.289811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.772 [2024-12-16 02:57:47.289818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.772 [2024-12-16 02:57:47.290005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.772 [2024-12-16 02:57:47.290174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.772 [2024-12-16 02:57:47.290183] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.772 [2024-12-16 02:57:47.290192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.772 [2024-12-16 02:57:47.290199] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.772 [2024-12-16 02:57:47.302129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.772 [2024-12-16 02:57:47.302434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.772 [2024-12-16 02:57:47.302451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.772 [2024-12-16 02:57:47.302458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.772 [2024-12-16 02:57:47.302618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.772 [2024-12-16 02:57:47.302777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.772 [2024-12-16 02:57:47.302786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.772 [2024-12-16 02:57:47.302792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.772 [2024-12-16 02:57:47.302798] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.772 [2024-12-16 02:57:47.314997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.772 [2024-12-16 02:57:47.315420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.772 [2024-12-16 02:57:47.315465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.772 [2024-12-16 02:57:47.315488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.772 [2024-12-16 02:57:47.316064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.772 [2024-12-16 02:57:47.316234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.772 [2024-12-16 02:57:47.316244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.772 [2024-12-16 02:57:47.316250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.772 [2024-12-16 02:57:47.316257] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.772 [2024-12-16 02:57:47.327826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.772 [2024-12-16 02:57:47.328255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.772 [2024-12-16 02:57:47.328300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.772 [2024-12-16 02:57:47.328324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.772 [2024-12-16 02:57:47.328920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.772 [2024-12-16 02:57:47.329395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.772 [2024-12-16 02:57:47.329404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.772 [2024-12-16 02:57:47.329411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.772 [2024-12-16 02:57:47.329417] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.772 [2024-12-16 02:57:47.340683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.772 [2024-12-16 02:57:47.341026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.772 [2024-12-16 02:57:47.341043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.773 [2024-12-16 02:57:47.341051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.773 [2024-12-16 02:57:47.341209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.773 [2024-12-16 02:57:47.341369] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.773 [2024-12-16 02:57:47.341378] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.773 [2024-12-16 02:57:47.341385] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.773 [2024-12-16 02:57:47.341391] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.773 [2024-12-16 02:57:47.353406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.773 [2024-12-16 02:57:47.353733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.773 [2024-12-16 02:57:47.353750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.773 [2024-12-16 02:57:47.353757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.773 [2024-12-16 02:57:47.353940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.773 [2024-12-16 02:57:47.354128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.773 [2024-12-16 02:57:47.354137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.773 [2024-12-16 02:57:47.354143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.773 [2024-12-16 02:57:47.354149] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.773 [2024-12-16 02:57:47.366454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.773 [2024-12-16 02:57:47.366888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.773 [2024-12-16 02:57:47.366906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.773 [2024-12-16 02:57:47.366914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.773 [2024-12-16 02:57:47.367087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.773 [2024-12-16 02:57:47.367265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.773 [2024-12-16 02:57:47.367274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.773 [2024-12-16 02:57:47.367280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.773 [2024-12-16 02:57:47.367287] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.773 [2024-12-16 02:57:47.379184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.773 [2024-12-16 02:57:47.379583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.773 [2024-12-16 02:57:47.379627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.773 [2024-12-16 02:57:47.379657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.773 [2024-12-16 02:57:47.380254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.773 [2024-12-16 02:57:47.380641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.773 [2024-12-16 02:57:47.380650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.773 [2024-12-16 02:57:47.380657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.773 [2024-12-16 02:57:47.380663] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.773 [2024-12-16 02:57:47.392007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.773 [2024-12-16 02:57:47.392436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.773 [2024-12-16 02:57:47.392482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.773 [2024-12-16 02:57:47.392506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.773 [2024-12-16 02:57:47.393101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.773 [2024-12-16 02:57:47.393652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.773 [2024-12-16 02:57:47.393669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.773 [2024-12-16 02:57:47.393685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.773 [2024-12-16 02:57:47.393698] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.773 [2024-12-16 02:57:47.406950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.773 [2024-12-16 02:57:47.407466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.773 [2024-12-16 02:57:47.407489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.773 [2024-12-16 02:57:47.407499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.773 [2024-12-16 02:57:47.407753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.773 [2024-12-16 02:57:47.408017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.773 [2024-12-16 02:57:47.408031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.773 [2024-12-16 02:57:47.408040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.773 [2024-12-16 02:57:47.408050] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:16.773 [2024-12-16 02:57:47.420025] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:16.773 [2024-12-16 02:57:47.420414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:16.773 [2024-12-16 02:57:47.420460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:16.773 [2024-12-16 02:57:47.420484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:16.773 [2024-12-16 02:57:47.421081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:16.773 [2024-12-16 02:57:47.421576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:16.773 [2024-12-16 02:57:47.421586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:16.773 [2024-12-16 02:57:47.421593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:16.773 [2024-12-16 02:57:47.421599] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.034 [2024-12-16 02:57:47.432972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.034 [2024-12-16 02:57:47.433395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.034 [2024-12-16 02:57:47.433413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.034 [2024-12-16 02:57:47.433420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.034 [2024-12-16 02:57:47.433588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.034 [2024-12-16 02:57:47.433757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.034 [2024-12-16 02:57:47.433766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.034 [2024-12-16 02:57:47.433773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.034 [2024-12-16 02:57:47.433780] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.034 [2024-12-16 02:57:47.445760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.034 [2024-12-16 02:57:47.446102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.034 [2024-12-16 02:57:47.446120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.034 [2024-12-16 02:57:47.446127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.034 [2024-12-16 02:57:47.446286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.034 [2024-12-16 02:57:47.446444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.034 [2024-12-16 02:57:47.446454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.034 [2024-12-16 02:57:47.446459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.034 [2024-12-16 02:57:47.446466] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.034 [2024-12-16 02:57:47.458677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.034 [2024-12-16 02:57:47.459088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.034 [2024-12-16 02:57:47.459106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.034 [2024-12-16 02:57:47.459114] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.034 [2024-12-16 02:57:47.459273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.034 [2024-12-16 02:57:47.459433] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.034 [2024-12-16 02:57:47.459442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.034 [2024-12-16 02:57:47.459452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.034 [2024-12-16 02:57:47.459459] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.034 [2024-12-16 02:57:47.471512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.034 [2024-12-16 02:57:47.471844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.034 [2024-12-16 02:57:47.471867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.034 [2024-12-16 02:57:47.471875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.034 [2024-12-16 02:57:47.472034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.034 [2024-12-16 02:57:47.472212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.034 [2024-12-16 02:57:47.472221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.034 [2024-12-16 02:57:47.472228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.034 [2024-12-16 02:57:47.472234] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.034 [2024-12-16 02:57:47.484368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.034 [2024-12-16 02:57:47.484785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.034 [2024-12-16 02:57:47.484833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.034 [2024-12-16 02:57:47.484873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.034 [2024-12-16 02:57:47.485460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.034 [2024-12-16 02:57:47.485629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.034 [2024-12-16 02:57:47.485639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.034 [2024-12-16 02:57:47.485646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.034 [2024-12-16 02:57:47.485652] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.034 [2024-12-16 02:57:47.497264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.034 [2024-12-16 02:57:47.497543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.034 [2024-12-16 02:57:47.497561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.034 [2024-12-16 02:57:47.497567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.034 [2024-12-16 02:57:47.497728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.034 [2024-12-16 02:57:47.497910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.034 [2024-12-16 02:57:47.497921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.034 [2024-12-16 02:57:47.497928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.034 [2024-12-16 02:57:47.497934] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.034 [2024-12-16 02:57:47.510018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.034 [2024-12-16 02:57:47.510425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.034 [2024-12-16 02:57:47.510442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.034 [2024-12-16 02:57:47.510449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.034 [2024-12-16 02:57:47.510617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.034 [2024-12-16 02:57:47.510786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.034 [2024-12-16 02:57:47.510796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.034 [2024-12-16 02:57:47.510802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.034 [2024-12-16 02:57:47.510809] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.034 [2024-12-16 02:57:47.522995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.034 [2024-12-16 02:57:47.523286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.034 [2024-12-16 02:57:47.523303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.035 [2024-12-16 02:57:47.523311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.035 [2024-12-16 02:57:47.523480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.035 [2024-12-16 02:57:47.523648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.035 [2024-12-16 02:57:47.523658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.035 [2024-12-16 02:57:47.523667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.035 [2024-12-16 02:57:47.523675] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.035 [2024-12-16 02:57:47.535998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.035 [2024-12-16 02:57:47.536371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.035 [2024-12-16 02:57:47.536424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.035 [2024-12-16 02:57:47.536453] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.035 [2024-12-16 02:57:47.536980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.035 [2024-12-16 02:57:47.537156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.035 [2024-12-16 02:57:47.537165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.035 [2024-12-16 02:57:47.537172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.035 [2024-12-16 02:57:47.537179] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.035 [2024-12-16 02:57:47.548799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.035 [2024-12-16 02:57:47.549078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.035 [2024-12-16 02:57:47.549095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.035 [2024-12-16 02:57:47.549109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.035 [2024-12-16 02:57:47.549269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.035 [2024-12-16 02:57:47.549428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.035 [2024-12-16 02:57:47.549438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.035 [2024-12-16 02:57:47.549444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.035 [2024-12-16 02:57:47.549450] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.035 [2024-12-16 02:57:47.561649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.035 [2024-12-16 02:57:47.562129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.035 [2024-12-16 02:57:47.562176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.035 [2024-12-16 02:57:47.562200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.035 [2024-12-16 02:57:47.562712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.035 [2024-12-16 02:57:47.562878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.035 [2024-12-16 02:57:47.562889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.035 [2024-12-16 02:57:47.562895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.035 [2024-12-16 02:57:47.562902] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.035 [2024-12-16 02:57:47.574479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.035 [2024-12-16 02:57:47.574883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.035 [2024-12-16 02:57:47.574930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.035 [2024-12-16 02:57:47.574953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.035 [2024-12-16 02:57:47.575500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.035 [2024-12-16 02:57:47.575902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.035 [2024-12-16 02:57:47.575923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.035 [2024-12-16 02:57:47.575938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.035 [2024-12-16 02:57:47.575952] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.035 [2024-12-16 02:57:47.589292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.035 [2024-12-16 02:57:47.589825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.035 [2024-12-16 02:57:47.589856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.035 [2024-12-16 02:57:47.589868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.035 [2024-12-16 02:57:47.590121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.035 [2024-12-16 02:57:47.590381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.035 [2024-12-16 02:57:47.590394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.035 [2024-12-16 02:57:47.590404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.035 [2024-12-16 02:57:47.590414] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.035 [2024-12-16 02:57:47.602322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.035 [2024-12-16 02:57:47.602706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.035 [2024-12-16 02:57:47.602723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.035 [2024-12-16 02:57:47.602732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.035 [2024-12-16 02:57:47.602913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.035 [2024-12-16 02:57:47.603088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.035 [2024-12-16 02:57:47.603098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.035 [2024-12-16 02:57:47.603105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.035 [2024-12-16 02:57:47.603112] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.035 [2024-12-16 02:57:47.615130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.035 [2024-12-16 02:57:47.615465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.035 [2024-12-16 02:57:47.615483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.035 [2024-12-16 02:57:47.615490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.035 [2024-12-16 02:57:47.615658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.035 [2024-12-16 02:57:47.615826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.035 [2024-12-16 02:57:47.615835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.035 [2024-12-16 02:57:47.615842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.035 [2024-12-16 02:57:47.615856] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.035 [2024-12-16 02:57:47.628178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.035 [2024-12-16 02:57:47.628460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.035 [2024-12-16 02:57:47.628479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.035 [2024-12-16 02:57:47.628486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.035 [2024-12-16 02:57:47.628654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.035 [2024-12-16 02:57:47.628828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.035 [2024-12-16 02:57:47.628838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.035 [2024-12-16 02:57:47.628858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.035 [2024-12-16 02:57:47.628865] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.035 [2024-12-16 02:57:47.641190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.035 [2024-12-16 02:57:47.641646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.035 [2024-12-16 02:57:47.641664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.035 [2024-12-16 02:57:47.641671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.035 [2024-12-16 02:57:47.641839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.035 [2024-12-16 02:57:47.642016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.035 [2024-12-16 02:57:47.642026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.035 [2024-12-16 02:57:47.642033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.035 [2024-12-16 02:57:47.642039] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.035 [2024-12-16 02:57:47.654012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.035 [2024-12-16 02:57:47.654280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.035 [2024-12-16 02:57:47.654297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.035 [2024-12-16 02:57:47.654304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.036 [2024-12-16 02:57:47.654462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.036 [2024-12-16 02:57:47.654621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.036 [2024-12-16 02:57:47.654630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.036 [2024-12-16 02:57:47.654637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.036 [2024-12-16 02:57:47.654643] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.036 [2024-12-16 02:57:47.666753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.036 [2024-12-16 02:57:47.667081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.036 [2024-12-16 02:57:47.667098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.036 [2024-12-16 02:57:47.667106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.036 [2024-12-16 02:57:47.667264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.036 [2024-12-16 02:57:47.667422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.036 [2024-12-16 02:57:47.667432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.036 [2024-12-16 02:57:47.667438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.036 [2024-12-16 02:57:47.667444] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.036 [2024-12-16 02:57:47.679500] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.036 [2024-12-16 02:57:47.679859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.036 [2024-12-16 02:57:47.679875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.036 [2024-12-16 02:57:47.679883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.036 [2024-12-16 02:57:47.680042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.036 [2024-12-16 02:57:47.680201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.036 [2024-12-16 02:57:47.680210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.036 [2024-12-16 02:57:47.680216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.036 [2024-12-16 02:57:47.680222] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.297 [2024-12-16 02:57:47.692463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.297 [2024-12-16 02:57:47.692870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.297 [2024-12-16 02:57:47.692888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.297 [2024-12-16 02:57:47.692896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.297 [2024-12-16 02:57:47.693064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.297 [2024-12-16 02:57:47.693232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.297 [2024-12-16 02:57:47.693242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.297 [2024-12-16 02:57:47.693249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.297 [2024-12-16 02:57:47.693255] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.297 [2024-12-16 02:57:47.705190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.297 [2024-12-16 02:57:47.705557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.297 [2024-12-16 02:57:47.705574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.297 [2024-12-16 02:57:47.705581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.297 [2024-12-16 02:57:47.705741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.297 [2024-12-16 02:57:47.705907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.297 [2024-12-16 02:57:47.705916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.297 [2024-12-16 02:57:47.705923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.297 [2024-12-16 02:57:47.705929] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.297 [2024-12-16 02:57:47.718034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.297 [2024-12-16 02:57:47.718423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.297 [2024-12-16 02:57:47.718441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.297 [2024-12-16 02:57:47.718452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.297 [2024-12-16 02:57:47.718611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.297 [2024-12-16 02:57:47.718773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.297 [2024-12-16 02:57:47.718782] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.297 [2024-12-16 02:57:47.718789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.297 [2024-12-16 02:57:47.718796] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.297 [2024-12-16 02:57:47.730764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.297 [2024-12-16 02:57:47.731051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.297 [2024-12-16 02:57:47.731098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.297 [2024-12-16 02:57:47.731122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.297 [2024-12-16 02:57:47.731702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.297 [2024-12-16 02:57:47.732296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.297 [2024-12-16 02:57:47.732307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.297 [2024-12-16 02:57:47.732313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.297 [2024-12-16 02:57:47.732319] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.297 [2024-12-16 02:57:47.743607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.297 [2024-12-16 02:57:47.743959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.297 [2024-12-16 02:57:47.743977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.297 [2024-12-16 02:57:47.743985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.297 [2024-12-16 02:57:47.744144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.297 [2024-12-16 02:57:47.744304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.297 [2024-12-16 02:57:47.744312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.297 [2024-12-16 02:57:47.744319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.297 [2024-12-16 02:57:47.744325] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.297 [2024-12-16 02:57:47.756354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.297 [2024-12-16 02:57:47.756668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.297 [2024-12-16 02:57:47.756686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.297 [2024-12-16 02:57:47.756694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.297 [2024-12-16 02:57:47.756858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.297 [2024-12-16 02:57:47.757021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.297 [2024-12-16 02:57:47.757031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.297 [2024-12-16 02:57:47.757037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.297 [2024-12-16 02:57:47.757043] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.297 [2024-12-16 02:57:47.769138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.297 [2024-12-16 02:57:47.769479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.297 [2024-12-16 02:57:47.769496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.297 [2024-12-16 02:57:47.769503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.297 [2024-12-16 02:57:47.769661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.297 [2024-12-16 02:57:47.769820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.297 [2024-12-16 02:57:47.769830] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.297 [2024-12-16 02:57:47.769836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.297 [2024-12-16 02:57:47.769842] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.297 [2024-12-16 02:57:47.781980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.297 [2024-12-16 02:57:47.782296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.297 [2024-12-16 02:57:47.782313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.297 [2024-12-16 02:57:47.782320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.297 [2024-12-16 02:57:47.782478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.297 [2024-12-16 02:57:47.782637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.297 [2024-12-16 02:57:47.782646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.297 [2024-12-16 02:57:47.782653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.297 [2024-12-16 02:57:47.782659] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.297 [2024-12-16 02:57:47.794752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.297 [2024-12-16 02:57:47.795123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.297 [2024-12-16 02:57:47.795141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.297 [2024-12-16 02:57:47.795148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.297 [2024-12-16 02:57:47.795307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.297 [2024-12-16 02:57:47.795465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.298 [2024-12-16 02:57:47.795475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.298 [2024-12-16 02:57:47.795485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.298 [2024-12-16 02:57:47.795492] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.298 [2024-12-16 02:57:47.807587] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.298 [2024-12-16 02:57:47.808012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-12-16 02:57:47.808059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.298 [2024-12-16 02:57:47.808083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.298 [2024-12-16 02:57:47.808644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.298 [2024-12-16 02:57:47.808805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.298 [2024-12-16 02:57:47.808814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.298 [2024-12-16 02:57:47.808820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.298 [2024-12-16 02:57:47.808826] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.298 [2024-12-16 02:57:47.820410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.298 [2024-12-16 02:57:47.820807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-12-16 02:57:47.820825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.298 [2024-12-16 02:57:47.820832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.298 [2024-12-16 02:57:47.821004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.298 [2024-12-16 02:57:47.821166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.298 [2024-12-16 02:57:47.821175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.298 [2024-12-16 02:57:47.821181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.298 [2024-12-16 02:57:47.821187] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.298 [2024-12-16 02:57:47.833138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.298 [2024-12-16 02:57:47.833455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-12-16 02:57:47.833472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.298 [2024-12-16 02:57:47.833480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.298 [2024-12-16 02:57:47.833639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.298 [2024-12-16 02:57:47.833798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.298 [2024-12-16 02:57:47.833808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.298 [2024-12-16 02:57:47.833814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.298 [2024-12-16 02:57:47.833821] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.298 [2024-12-16 02:57:47.845922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.298 [2024-12-16 02:57:47.846267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-12-16 02:57:47.846284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.298 [2024-12-16 02:57:47.846292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.298 [2024-12-16 02:57:47.846450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.298 [2024-12-16 02:57:47.846609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.298 [2024-12-16 02:57:47.846618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.298 [2024-12-16 02:57:47.846624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.298 [2024-12-16 02:57:47.846630] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.298 [2024-12-16 02:57:47.858668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.298 [2024-12-16 02:57:47.859006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-12-16 02:57:47.859023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.298 [2024-12-16 02:57:47.859030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.298 [2024-12-16 02:57:47.859188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.298 [2024-12-16 02:57:47.859348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.298 [2024-12-16 02:57:47.859357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.298 [2024-12-16 02:57:47.859363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.298 [2024-12-16 02:57:47.859369] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.298 [2024-12-16 02:57:47.871471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.298 [2024-12-16 02:57:47.871890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-12-16 02:57:47.871909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.298 [2024-12-16 02:57:47.871917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.298 [2024-12-16 02:57:47.872084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.298 [2024-12-16 02:57:47.872252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.298 [2024-12-16 02:57:47.872261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.298 [2024-12-16 02:57:47.872268] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.298 [2024-12-16 02:57:47.872274] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.298 [2024-12-16 02:57:47.884435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.298 [2024-12-16 02:57:47.884894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.298 [2024-12-16 02:57:47.884939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.298 [2024-12-16 02:57:47.884971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.298 [2024-12-16 02:57:47.885553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.298 [2024-12-16 02:57:47.885894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.298 [2024-12-16 02:57:47.885905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.298 [2024-12-16 02:57:47.885912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.298 [2024-12-16 02:57:47.885919] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.298 [2024-12-16 02:57:47.897412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.298 [2024-12-16 02:57:47.897789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.298 [2024-12-16 02:57:47.897806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.298 [2024-12-16 02:57:47.897815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.298 [2024-12-16 02:57:47.897986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.298 [2024-12-16 02:57:47.898156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.298 [2024-12-16 02:57:47.898165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.298 [2024-12-16 02:57:47.898171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.298 [2024-12-16 02:57:47.898178] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.298 [2024-12-16 02:57:47.910246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.298 [2024-12-16 02:57:47.910674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.298 [2024-12-16 02:57:47.910718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.298 [2024-12-16 02:57:47.910742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.298 [2024-12-16 02:57:47.911335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.298 [2024-12-16 02:57:47.911813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.298 [2024-12-16 02:57:47.911822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.298 [2024-12-16 02:57:47.911829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.298 [2024-12-16 02:57:47.911836] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.298 [2024-12-16 02:57:47.923062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.298 [2024-12-16 02:57:47.923398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.298 [2024-12-16 02:57:47.923415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.298 [2024-12-16 02:57:47.923422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.298 [2024-12-16 02:57:47.923581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.298 [2024-12-16 02:57:47.923744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.299 [2024-12-16 02:57:47.923753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.299 [2024-12-16 02:57:47.923759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.299 [2024-12-16 02:57:47.923766] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.299 [2024-12-16 02:57:47.935851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.299 [2024-12-16 02:57:47.936267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.299 [2024-12-16 02:57:47.936284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.299 [2024-12-16 02:57:47.936291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.299 [2024-12-16 02:57:47.936450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.299 [2024-12-16 02:57:47.936609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.299 [2024-12-16 02:57:47.936619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.299 [2024-12-16 02:57:47.936625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.299 [2024-12-16 02:57:47.936631] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.299 [2024-12-16 02:57:47.948869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.299 [2024-12-16 02:57:47.949282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.299 [2024-12-16 02:57:47.949299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.299 [2024-12-16 02:57:47.949307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.299 [2024-12-16 02:57:47.949474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.299 [2024-12-16 02:57:47.949642] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.299 [2024-12-16 02:57:47.949652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.299 [2024-12-16 02:57:47.949658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.299 [2024-12-16 02:57:47.949665] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.560 [2024-12-16 02:57:47.961781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.560 [2024-12-16 02:57:47.962208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.560 [2024-12-16 02:57:47.962226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.560 [2024-12-16 02:57:47.962249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.560 [2024-12-16 02:57:47.962802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.560 [2024-12-16 02:57:47.962970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.560 [2024-12-16 02:57:47.962979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.560 [2024-12-16 02:57:47.962989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.560 [2024-12-16 02:57:47.962995] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.560 [2024-12-16 02:57:47.974655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.560 [2024-12-16 02:57:47.974991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.560 [2024-12-16 02:57:47.975009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.560 [2024-12-16 02:57:47.975016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.560 [2024-12-16 02:57:47.975175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.560 [2024-12-16 02:57:47.975334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.560 [2024-12-16 02:57:47.975343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.560 [2024-12-16 02:57:47.975350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.560 [2024-12-16 02:57:47.975356] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.560 [2024-12-16 02:57:47.987433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.560 [2024-12-16 02:57:47.987779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.560 [2024-12-16 02:57:47.987825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.560 [2024-12-16 02:57:47.987863] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.560 [2024-12-16 02:57:47.988448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.560 [2024-12-16 02:57:47.988989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.560 [2024-12-16 02:57:47.988999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.560 [2024-12-16 02:57:47.989005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.560 [2024-12-16 02:57:47.989011] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.560 [2024-12-16 02:57:48.000197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.560 [2024-12-16 02:57:48.000590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.560 [2024-12-16 02:57:48.000634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.560 [2024-12-16 02:57:48.000657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.560 [2024-12-16 02:57:48.001147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.560 [2024-12-16 02:57:48.001309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.560 [2024-12-16 02:57:48.001317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.560 [2024-12-16 02:57:48.001323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.560 [2024-12-16 02:57:48.001328] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.560 [2024-12-16 02:57:48.012962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.560 [2024-12-16 02:57:48.013313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.560 [2024-12-16 02:57:48.013330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.560 [2024-12-16 02:57:48.013336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.560 [2024-12-16 02:57:48.013495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.560 [2024-12-16 02:57:48.013654] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.560 [2024-12-16 02:57:48.013663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.560 [2024-12-16 02:57:48.013670] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.560 [2024-12-16 02:57:48.013676] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.560 [2024-12-16 02:57:48.025766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.560 [2024-12-16 02:57:48.026180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.560 [2024-12-16 02:57:48.026221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.560 [2024-12-16 02:57:48.026247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.560 [2024-12-16 02:57:48.026795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.560 [2024-12-16 02:57:48.026960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.560 [2024-12-16 02:57:48.026969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.560 [2024-12-16 02:57:48.026975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.560 [2024-12-16 02:57:48.026980] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.560 [2024-12-16 02:57:48.038623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.560 [2024-12-16 02:57:48.038969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.560 [2024-12-16 02:57:48.038986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.560 [2024-12-16 02:57:48.038993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.560 [2024-12-16 02:57:48.039152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.560 [2024-12-16 02:57:48.039311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.560 [2024-12-16 02:57:48.039321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.560 [2024-12-16 02:57:48.039327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.560 [2024-12-16 02:57:48.039333] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.560 7330.50 IOPS, 28.63 MiB/s [2024-12-16T01:57:48.219Z]
00:36:17.560 [2024-12-16 02:57:48.051394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.560 [2024-12-16 02:57:48.051812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.560 [2024-12-16 02:57:48.051873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.560 [2024-12-16 02:57:48.051907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.560 [2024-12-16 02:57:48.052490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.560 [2024-12-16 02:57:48.053086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.560 [2024-12-16 02:57:48.053113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.560 [2024-12-16 02:57:48.053133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.560 [2024-12-16 02:57:48.053153] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.560 [2024-12-16 02:57:48.064242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.560 [2024-12-16 02:57:48.064654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.560 [2024-12-16 02:57:48.064671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.560 [2024-12-16 02:57:48.064678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.561 [2024-12-16 02:57:48.064837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.561 [2024-12-16 02:57:48.065005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.561 [2024-12-16 02:57:48.065016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.561 [2024-12-16 02:57:48.065022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.561 [2024-12-16 02:57:48.065029] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.561 [2024-12-16 02:57:48.077101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.561 [2024-12-16 02:57:48.077487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.561 [2024-12-16 02:57:48.077504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.561 [2024-12-16 02:57:48.077511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.561 [2024-12-16 02:57:48.077670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.561 [2024-12-16 02:57:48.077829] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.561 [2024-12-16 02:57:48.077838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.561 [2024-12-16 02:57:48.077845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.561 [2024-12-16 02:57:48.077859] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.561 [2024-12-16 02:57:48.089933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.561 [2024-12-16 02:57:48.090344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.561 [2024-12-16 02:57:48.090389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.561 [2024-12-16 02:57:48.090413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.561 [2024-12-16 02:57:48.090927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.561 [2024-12-16 02:57:48.091091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.561 [2024-12-16 02:57:48.091099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.561 [2024-12-16 02:57:48.091105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.561 [2024-12-16 02:57:48.091111] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.561 [2024-12-16 02:57:48.102751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.561 [2024-12-16 02:57:48.103162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.561 [2024-12-16 02:57:48.103200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.561 [2024-12-16 02:57:48.103226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.561 [2024-12-16 02:57:48.103808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.561 [2024-12-16 02:57:48.104281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.561 [2024-12-16 02:57:48.104291] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.561 [2024-12-16 02:57:48.104297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.561 [2024-12-16 02:57:48.104303] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.561 [2024-12-16 02:57:48.115581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.561 [2024-12-16 02:57:48.115970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.561 [2024-12-16 02:57:48.115987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.561 [2024-12-16 02:57:48.115994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.561 [2024-12-16 02:57:48.116153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.561 [2024-12-16 02:57:48.116312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.561 [2024-12-16 02:57:48.116321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.561 [2024-12-16 02:57:48.116327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.561 [2024-12-16 02:57:48.116333] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.561 [2024-12-16 02:57:48.128505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.561 [2024-12-16 02:57:48.128895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.561 [2024-12-16 02:57:48.128912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.561 [2024-12-16 02:57:48.128920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.561 [2024-12-16 02:57:48.129087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.561 [2024-12-16 02:57:48.129254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.561 [2024-12-16 02:57:48.129264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.561 [2024-12-16 02:57:48.129274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.561 [2024-12-16 02:57:48.129282] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.561 [2024-12-16 02:57:48.141454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.561 [2024-12-16 02:57:48.141873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.561 [2024-12-16 02:57:48.141891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.561 [2024-12-16 02:57:48.141899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.561 [2024-12-16 02:57:48.142067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.561 [2024-12-16 02:57:48.142235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.561 [2024-12-16 02:57:48.142245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.561 [2024-12-16 02:57:48.142251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.561 [2024-12-16 02:57:48.142258] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.561 [2024-12-16 02:57:48.154393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.561 [2024-12-16 02:57:48.154814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.561 [2024-12-16 02:57:48.154832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.561 [2024-12-16 02:57:48.154839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.561 [2024-12-16 02:57:48.155013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.561 [2024-12-16 02:57:48.155182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.561 [2024-12-16 02:57:48.155191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.561 [2024-12-16 02:57:48.155198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.561 [2024-12-16 02:57:48.155205] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.561 [2024-12-16 02:57:48.167199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.561 [2024-12-16 02:57:48.167606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.561 [2024-12-16 02:57:48.167639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.561 [2024-12-16 02:57:48.167665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.561 [2024-12-16 02:57:48.168197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.561 [2024-12-16 02:57:48.168358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.561 [2024-12-16 02:57:48.168366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.561 [2024-12-16 02:57:48.168372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.561 [2024-12-16 02:57:48.168378] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.561 [2024-12-16 02:57:48.179953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.561 [2024-12-16 02:57:48.180376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.561 [2024-12-16 02:57:48.180422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.561 [2024-12-16 02:57:48.180445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.561 [2024-12-16 02:57:48.181042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.561 [2024-12-16 02:57:48.181466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.561 [2024-12-16 02:57:48.181475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.561 [2024-12-16 02:57:48.181481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.561 [2024-12-16 02:57:48.181487] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.561 [2024-12-16 02:57:48.192812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:17.562 [2024-12-16 02:57:48.193224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:17.562 [2024-12-16 02:57:48.193241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:17.562 [2024-12-16 02:57:48.193249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:17.562 [2024-12-16 02:57:48.193407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:17.562 [2024-12-16 02:57:48.193566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:17.562 [2024-12-16 02:57:48.193576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:17.562 [2024-12-16 02:57:48.193582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:17.562 [2024-12-16 02:57:48.193589] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:17.562 [2024-12-16 02:57:48.205701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.562 [2024-12-16 02:57:48.206141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.562 [2024-12-16 02:57:48.206187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.562 [2024-12-16 02:57:48.206211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.562 [2024-12-16 02:57:48.206792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.562 [2024-12-16 02:57:48.207393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.562 [2024-12-16 02:57:48.207421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.562 [2024-12-16 02:57:48.207442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.562 [2024-12-16 02:57:48.207461] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.822 [2024-12-16 02:57:48.218669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.822 [2024-12-16 02:57:48.219099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.822 [2024-12-16 02:57:48.219118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.822 [2024-12-16 02:57:48.219129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.822 [2024-12-16 02:57:48.219302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.822 [2024-12-16 02:57:48.219476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.822 [2024-12-16 02:57:48.219486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.822 [2024-12-16 02:57:48.219493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.822 [2024-12-16 02:57:48.219499] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.822 [2024-12-16 02:57:48.231545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.822 [2024-12-16 02:57:48.231970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.822 [2024-12-16 02:57:48.232018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.823 [2024-12-16 02:57:48.232041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.823 [2024-12-16 02:57:48.232531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.823 [2024-12-16 02:57:48.232691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.823 [2024-12-16 02:57:48.232699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.823 [2024-12-16 02:57:48.232705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.823 [2024-12-16 02:57:48.232710] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.823 [2024-12-16 02:57:48.244356] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.823 [2024-12-16 02:57:48.244702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.823 [2024-12-16 02:57:48.244719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.823 [2024-12-16 02:57:48.244726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.823 [2024-12-16 02:57:48.244892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.823 [2024-12-16 02:57:48.245052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.823 [2024-12-16 02:57:48.245061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.823 [2024-12-16 02:57:48.245068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.823 [2024-12-16 02:57:48.245074] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.823 [2024-12-16 02:57:48.257089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.823 [2024-12-16 02:57:48.257500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.823 [2024-12-16 02:57:48.257517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.823 [2024-12-16 02:57:48.257524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.823 [2024-12-16 02:57:48.257682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.823 [2024-12-16 02:57:48.257845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.823 [2024-12-16 02:57:48.257862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.823 [2024-12-16 02:57:48.257868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.823 [2024-12-16 02:57:48.257875] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.823 [2024-12-16 02:57:48.269943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.823 [2024-12-16 02:57:48.270361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.823 [2024-12-16 02:57:48.270405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.823 [2024-12-16 02:57:48.270428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.823 [2024-12-16 02:57:48.270835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.823 [2024-12-16 02:57:48.271003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.823 [2024-12-16 02:57:48.271013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.823 [2024-12-16 02:57:48.271019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.823 [2024-12-16 02:57:48.271026] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.823 [2024-12-16 02:57:48.282790] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.823 [2024-12-16 02:57:48.283178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.823 [2024-12-16 02:57:48.283223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.823 [2024-12-16 02:57:48.283247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.823 [2024-12-16 02:57:48.283793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.823 [2024-12-16 02:57:48.284195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.823 [2024-12-16 02:57:48.284214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.823 [2024-12-16 02:57:48.284229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.823 [2024-12-16 02:57:48.284243] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.823 [2024-12-16 02:57:48.297624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.823 [2024-12-16 02:57:48.298158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.823 [2024-12-16 02:57:48.298203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.823 [2024-12-16 02:57:48.298226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.823 [2024-12-16 02:57:48.298749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.823 [2024-12-16 02:57:48.299013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.823 [2024-12-16 02:57:48.299027] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.823 [2024-12-16 02:57:48.299040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.823 [2024-12-16 02:57:48.299050] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.823 [2024-12-16 02:57:48.310574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.823 [2024-12-16 02:57:48.310990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.823 [2024-12-16 02:57:48.311008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.823 [2024-12-16 02:57:48.311016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.823 [2024-12-16 02:57:48.311183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.823 [2024-12-16 02:57:48.311350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.823 [2024-12-16 02:57:48.311360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.823 [2024-12-16 02:57:48.311366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.823 [2024-12-16 02:57:48.311373] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.823 [2024-12-16 02:57:48.323347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.823 [2024-12-16 02:57:48.323733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.823 [2024-12-16 02:57:48.323751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.823 [2024-12-16 02:57:48.323758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.823 [2024-12-16 02:57:48.323923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.823 [2024-12-16 02:57:48.324084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.823 [2024-12-16 02:57:48.324093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.823 [2024-12-16 02:57:48.324099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.823 [2024-12-16 02:57:48.324106] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.823 [2024-12-16 02:57:48.336285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.823 [2024-12-16 02:57:48.336704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.823 [2024-12-16 02:57:48.336750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.823 [2024-12-16 02:57:48.336773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.823 [2024-12-16 02:57:48.337371] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.823 [2024-12-16 02:57:48.337838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.823 [2024-12-16 02:57:48.337852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.823 [2024-12-16 02:57:48.337860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.823 [2024-12-16 02:57:48.337867] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.823 [2024-12-16 02:57:48.349048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.823 [2024-12-16 02:57:48.349457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.823 [2024-12-16 02:57:48.349474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.823 [2024-12-16 02:57:48.349481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.823 [2024-12-16 02:57:48.349640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.823 [2024-12-16 02:57:48.349799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.823 [2024-12-16 02:57:48.349808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.823 [2024-12-16 02:57:48.349814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.823 [2024-12-16 02:57:48.349820] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.823 [2024-12-16 02:57:48.361907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.823 [2024-12-16 02:57:48.362245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.823 [2024-12-16 02:57:48.362262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.824 [2024-12-16 02:57:48.362269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.824 [2024-12-16 02:57:48.362428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.824 [2024-12-16 02:57:48.362587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.824 [2024-12-16 02:57:48.362597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.824 [2024-12-16 02:57:48.362603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.824 [2024-12-16 02:57:48.362609] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.824 [2024-12-16 02:57:48.374731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.824 [2024-12-16 02:57:48.375149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.824 [2024-12-16 02:57:48.375193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.824 [2024-12-16 02:57:48.375216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.824 [2024-12-16 02:57:48.375642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.824 [2024-12-16 02:57:48.375803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.824 [2024-12-16 02:57:48.375811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.824 [2024-12-16 02:57:48.375818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.824 [2024-12-16 02:57:48.375824] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.824 [2024-12-16 02:57:48.387454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.824 [2024-12-16 02:57:48.387859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.824 [2024-12-16 02:57:48.387877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.824 [2024-12-16 02:57:48.387888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.824 [2024-12-16 02:57:48.388056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.824 [2024-12-16 02:57:48.388225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.824 [2024-12-16 02:57:48.388234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.824 [2024-12-16 02:57:48.388240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.824 [2024-12-16 02:57:48.388247] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.824 [2024-12-16 02:57:48.400381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.824 [2024-12-16 02:57:48.400805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.824 [2024-12-16 02:57:48.400821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.824 [2024-12-16 02:57:48.400829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.824 [2024-12-16 02:57:48.401005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.824 [2024-12-16 02:57:48.401175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.824 [2024-12-16 02:57:48.401184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.824 [2024-12-16 02:57:48.401190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.824 [2024-12-16 02:57:48.401197] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.824 [2024-12-16 02:57:48.413242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.824 [2024-12-16 02:57:48.413649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.824 [2024-12-16 02:57:48.413665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.824 [2024-12-16 02:57:48.413672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.824 [2024-12-16 02:57:48.413831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.824 [2024-12-16 02:57:48.414043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.824 [2024-12-16 02:57:48.414054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.824 [2024-12-16 02:57:48.414061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.824 [2024-12-16 02:57:48.414067] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.824 [2024-12-16 02:57:48.426064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.824 [2024-12-16 02:57:48.426488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.824 [2024-12-16 02:57:48.426534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.824 [2024-12-16 02:57:48.426558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.824 [2024-12-16 02:57:48.427054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.824 [2024-12-16 02:57:48.427219] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.824 [2024-12-16 02:57:48.427227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.824 [2024-12-16 02:57:48.427233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.824 [2024-12-16 02:57:48.427239] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.824 [2024-12-16 02:57:48.438815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.824 [2024-12-16 02:57:48.439213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.824 [2024-12-16 02:57:48.439258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.824 [2024-12-16 02:57:48.439281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.824 [2024-12-16 02:57:48.439713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.824 [2024-12-16 02:57:48.439880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.824 [2024-12-16 02:57:48.439890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.824 [2024-12-16 02:57:48.439897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.824 [2024-12-16 02:57:48.439904] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.824 [2024-12-16 02:57:48.451868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.824 [2024-12-16 02:57:48.452293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.824 [2024-12-16 02:57:48.452335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.824 [2024-12-16 02:57:48.452361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.824 [2024-12-16 02:57:48.452880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.824 [2024-12-16 02:57:48.453056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.824 [2024-12-16 02:57:48.453066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.824 [2024-12-16 02:57:48.453073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.824 [2024-12-16 02:57:48.453080] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.824 [2024-12-16 02:57:48.464926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.824 [2024-12-16 02:57:48.465357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.824 [2024-12-16 02:57:48.465375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.824 [2024-12-16 02:57:48.465382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.824 [2024-12-16 02:57:48.465555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.824 [2024-12-16 02:57:48.465728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.824 [2024-12-16 02:57:48.465737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.824 [2024-12-16 02:57:48.465747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.824 [2024-12-16 02:57:48.465755] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:17.824 [2024-12-16 02:57:48.477817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:17.824 [2024-12-16 02:57:48.478164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:17.824 [2024-12-16 02:57:48.478211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:17.824 [2024-12-16 02:57:48.478235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:17.824 [2024-12-16 02:57:48.478817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:17.824 [2024-12-16 02:57:48.478995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:17.824 [2024-12-16 02:57:48.479005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:17.824 [2024-12-16 02:57:48.479012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:17.824 [2024-12-16 02:57:48.479018] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.085 [2024-12-16 02:57:48.490698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.085 [2024-12-16 02:57:48.491115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.085 [2024-12-16 02:57:48.491158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.085 [2024-12-16 02:57:48.491183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.085 [2024-12-16 02:57:48.491764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.085 [2024-12-16 02:57:48.492098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.085 [2024-12-16 02:57:48.492108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.085 [2024-12-16 02:57:48.492114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.085 [2024-12-16 02:57:48.492121] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.085 [2024-12-16 02:57:48.503453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.085 [2024-12-16 02:57:48.503882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.085 [2024-12-16 02:57:48.503928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.085 [2024-12-16 02:57:48.503951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.085 [2024-12-16 02:57:48.504344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.085 [2024-12-16 02:57:48.504504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.085 [2024-12-16 02:57:48.504514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.085 [2024-12-16 02:57:48.504520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.085 [2024-12-16 02:57:48.504526] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.085 [2024-12-16 02:57:48.516519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.086 [2024-12-16 02:57:48.516918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.086 [2024-12-16 02:57:48.516938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.086 [2024-12-16 02:57:48.516962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.086 [2024-12-16 02:57:48.517132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.086 [2024-12-16 02:57:48.517301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.086 [2024-12-16 02:57:48.517311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.086 [2024-12-16 02:57:48.517318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.086 [2024-12-16 02:57:48.517325] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.086 [2024-12-16 02:57:48.529523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.086 [2024-12-16 02:57:48.529964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.086 [2024-12-16 02:57:48.529983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.086 [2024-12-16 02:57:48.529991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.086 [2024-12-16 02:57:48.530160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.086 [2024-12-16 02:57:48.530329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.086 [2024-12-16 02:57:48.530338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.086 [2024-12-16 02:57:48.530345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.086 [2024-12-16 02:57:48.530351] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.086 [2024-12-16 02:57:48.542425] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.086 [2024-12-16 02:57:48.542841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.086 [2024-12-16 02:57:48.542865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.086 [2024-12-16 02:57:48.542874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.086 [2024-12-16 02:57:48.543043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.086 [2024-12-16 02:57:48.543210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.086 [2024-12-16 02:57:48.543220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.086 [2024-12-16 02:57:48.543227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.086 [2024-12-16 02:57:48.543234] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.086 [2024-12-16 02:57:48.555285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.086 [2024-12-16 02:57:48.555695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.086 [2024-12-16 02:57:48.555712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.086 [2024-12-16 02:57:48.555723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.086 [2024-12-16 02:57:48.555887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.086 [2024-12-16 02:57:48.556048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.086 [2024-12-16 02:57:48.556057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.086 [2024-12-16 02:57:48.556064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.086 [2024-12-16 02:57:48.556071] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.086 [2024-12-16 02:57:48.568105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.086 [2024-12-16 02:57:48.568445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.086 [2024-12-16 02:57:48.568463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.086 [2024-12-16 02:57:48.568470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.086 [2024-12-16 02:57:48.568629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.086 [2024-12-16 02:57:48.568788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.086 [2024-12-16 02:57:48.568797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.086 [2024-12-16 02:57:48.568803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.086 [2024-12-16 02:57:48.568810] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.086 [2024-12-16 02:57:48.580879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.086 [2024-12-16 02:57:48.581223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.086 [2024-12-16 02:57:48.581240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.086 [2024-12-16 02:57:48.581247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.086 [2024-12-16 02:57:48.581406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.086 [2024-12-16 02:57:48.581565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.086 [2024-12-16 02:57:48.581575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.086 [2024-12-16 02:57:48.581581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.086 [2024-12-16 02:57:48.581588] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.086 [2024-12-16 02:57:48.593698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.086 [2024-12-16 02:57:48.594095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.086 [2024-12-16 02:57:48.594112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.086 [2024-12-16 02:57:48.594119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.086 [2024-12-16 02:57:48.594277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.086 [2024-12-16 02:57:48.594441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.086 [2024-12-16 02:57:48.594449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.086 [2024-12-16 02:57:48.594455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.086 [2024-12-16 02:57:48.594461] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.086 [2024-12-16 02:57:48.606566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.086 [2024-12-16 02:57:48.606922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.086 [2024-12-16 02:57:48.606969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.086 [2024-12-16 02:57:48.606992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.086 [2024-12-16 02:57:48.607473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.086 [2024-12-16 02:57:48.607635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.086 [2024-12-16 02:57:48.607644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.086 [2024-12-16 02:57:48.607651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.086 [2024-12-16 02:57:48.607657] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.086 [2024-12-16 02:57:48.619489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.086 [2024-12-16 02:57:48.619865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.086 [2024-12-16 02:57:48.619911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.086 [2024-12-16 02:57:48.619935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.086 [2024-12-16 02:57:48.620438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.086 [2024-12-16 02:57:48.620598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.086 [2024-12-16 02:57:48.620607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.086 [2024-12-16 02:57:48.620613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.086 [2024-12-16 02:57:48.620620] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.086 [2024-12-16 02:57:48.632280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.086 [2024-12-16 02:57:48.632672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.086 [2024-12-16 02:57:48.632689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.086 [2024-12-16 02:57:48.632697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.087 [2024-12-16 02:57:48.632864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.087 [2024-12-16 02:57:48.633024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.087 [2024-12-16 02:57:48.633033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.087 [2024-12-16 02:57:48.633043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.087 [2024-12-16 02:57:48.633050] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.087 [2024-12-16 02:57:48.645134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.087 [2024-12-16 02:57:48.645547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.087 [2024-12-16 02:57:48.645566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.087 [2024-12-16 02:57:48.645573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.087 [2024-12-16 02:57:48.645732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.087 [2024-12-16 02:57:48.645916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.087 [2024-12-16 02:57:48.645925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.087 [2024-12-16 02:57:48.645932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.087 [2024-12-16 02:57:48.645938] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.087 [2024-12-16 02:57:48.658078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.087 [2024-12-16 02:57:48.658480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.087 [2024-12-16 02:57:48.658498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.087 [2024-12-16 02:57:48.658505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.087 [2024-12-16 02:57:48.658673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.087 [2024-12-16 02:57:48.658841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.087 [2024-12-16 02:57:48.658857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.087 [2024-12-16 02:57:48.658864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.087 [2024-12-16 02:57:48.658871] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.087 [2024-12-16 02:57:48.670869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.087 [2024-12-16 02:57:48.671306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.087 [2024-12-16 02:57:48.671322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.087 [2024-12-16 02:57:48.671330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.087 [2024-12-16 02:57:48.671489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.087 [2024-12-16 02:57:48.671649] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.087 [2024-12-16 02:57:48.671657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.087 [2024-12-16 02:57:48.671663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.087 [2024-12-16 02:57:48.671669] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.087 [2024-12-16 02:57:48.683705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.087 [2024-12-16 02:57:48.684102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.087 [2024-12-16 02:57:48.684119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.087 [2024-12-16 02:57:48.684127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.087 [2024-12-16 02:57:48.684286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.087 [2024-12-16 02:57:48.684444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.087 [2024-12-16 02:57:48.684454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.087 [2024-12-16 02:57:48.684460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.087 [2024-12-16 02:57:48.684466] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.087 [2024-12-16 02:57:48.696556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.087 [2024-12-16 02:57:48.696963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.087 [2024-12-16 02:57:48.696981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.087 [2024-12-16 02:57:48.696988] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.087 [2024-12-16 02:57:48.697147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.087 [2024-12-16 02:57:48.697306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.087 [2024-12-16 02:57:48.697315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.087 [2024-12-16 02:57:48.697322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.087 [2024-12-16 02:57:48.697329] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.087 [2024-12-16 02:57:48.709424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.087 [2024-12-16 02:57:48.709813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.087 [2024-12-16 02:57:48.709830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.087 [2024-12-16 02:57:48.709837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.087 [2024-12-16 02:57:48.710004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.087 [2024-12-16 02:57:48.710164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.087 [2024-12-16 02:57:48.710173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.087 [2024-12-16 02:57:48.710179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.087 [2024-12-16 02:57:48.710186] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.087 [2024-12-16 02:57:48.722274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.087 [2024-12-16 02:57:48.722629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.087 [2024-12-16 02:57:48.722646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.087 [2024-12-16 02:57:48.722656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.087 [2024-12-16 02:57:48.722814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.087 [2024-12-16 02:57:48.722982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.087 [2024-12-16 02:57:48.722991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.087 [2024-12-16 02:57:48.722998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.087 [2024-12-16 02:57:48.723004] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.087 [2024-12-16 02:57:48.735107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.087 [2024-12-16 02:57:48.735451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.087 [2024-12-16 02:57:48.735496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.087 [2024-12-16 02:57:48.735520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.087 [2024-12-16 02:57:48.736116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.087 [2024-12-16 02:57:48.736696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.087 [2024-12-16 02:57:48.736706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.087 [2024-12-16 02:57:48.736712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.087 [2024-12-16 02:57:48.736718] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.347 [2024-12-16 02:57:48.747990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.347 [2024-12-16 02:57:48.748407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.347 [2024-12-16 02:57:48.748425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.347 [2024-12-16 02:57:48.748433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.347 [2024-12-16 02:57:48.748601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.347 [2024-12-16 02:57:48.748770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.347 [2024-12-16 02:57:48.748779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.347 [2024-12-16 02:57:48.748786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.347 [2024-12-16 02:57:48.748792] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.347 [2024-12-16 02:57:48.760818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.347 [2024-12-16 02:57:48.761078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.347 [2024-12-16 02:57:48.761096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.347 [2024-12-16 02:57:48.761103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.347 [2024-12-16 02:57:48.761261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.347 [2024-12-16 02:57:48.761424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.347 [2024-12-16 02:57:48.761434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.347 [2024-12-16 02:57:48.761439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.347 [2024-12-16 02:57:48.761446] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.347 [2024-12-16 02:57:48.773540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.347 [2024-12-16 02:57:48.773960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.347 [2024-12-16 02:57:48.774017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.347 [2024-12-16 02:57:48.774041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.347 [2024-12-16 02:57:48.774624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.347 [2024-12-16 02:57:48.774854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.347 [2024-12-16 02:57:48.774865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.347 [2024-12-16 02:57:48.774872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.347 [2024-12-16 02:57:48.774878] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.347 [2024-12-16 02:57:48.786399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.347 [2024-12-16 02:57:48.786737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.347 [2024-12-16 02:57:48.786754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.347 [2024-12-16 02:57:48.786762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.347 [2024-12-16 02:57:48.786927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.348 [2024-12-16 02:57:48.787087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.348 [2024-12-16 02:57:48.787096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.348 [2024-12-16 02:57:48.787103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.348 [2024-12-16 02:57:48.787109] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.348 [2024-12-16 02:57:48.799195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.348 [2024-12-16 02:57:48.799589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.348 [2024-12-16 02:57:48.799606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.348 [2024-12-16 02:57:48.799613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.348 [2024-12-16 02:57:48.799772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.348 [2024-12-16 02:57:48.799939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.348 [2024-12-16 02:57:48.799949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.348 [2024-12-16 02:57:48.799963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.348 [2024-12-16 02:57:48.799970] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.348 [2024-12-16 02:57:48.812046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.348 [2024-12-16 02:57:48.812454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.348 [2024-12-16 02:57:48.812471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.348 [2024-12-16 02:57:48.812478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.348 [2024-12-16 02:57:48.812637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.348 [2024-12-16 02:57:48.812797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.348 [2024-12-16 02:57:48.812806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.348 [2024-12-16 02:57:48.812812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.348 [2024-12-16 02:57:48.812818] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.348 [2024-12-16 02:57:48.824853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.348 [2024-12-16 02:57:48.825244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.348 [2024-12-16 02:57:48.825261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.348 [2024-12-16 02:57:48.825269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.348 [2024-12-16 02:57:48.825427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.348 [2024-12-16 02:57:48.825586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.348 [2024-12-16 02:57:48.825596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.348 [2024-12-16 02:57:48.825602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.348 [2024-12-16 02:57:48.825609] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.348 [2024-12-16 02:57:48.837704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.348 [2024-12-16 02:57:48.838116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.348 [2024-12-16 02:57:48.838134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.348 [2024-12-16 02:57:48.838142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.348 [2024-12-16 02:57:48.838300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.348 [2024-12-16 02:57:48.838460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.348 [2024-12-16 02:57:48.838469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.348 [2024-12-16 02:57:48.838474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.348 [2024-12-16 02:57:48.838480] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.348 [2024-12-16 02:57:48.850561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.348 [2024-12-16 02:57:48.850971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.348 [2024-12-16 02:57:48.851024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.348 [2024-12-16 02:57:48.851048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.348 [2024-12-16 02:57:48.851628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.348 [2024-12-16 02:57:48.852147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.348 [2024-12-16 02:57:48.852157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.348 [2024-12-16 02:57:48.852163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.348 [2024-12-16 02:57:48.852169] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.348 [2024-12-16 02:57:48.863296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.348 [2024-12-16 02:57:48.863714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.348 [2024-12-16 02:57:48.863767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.348 [2024-12-16 02:57:48.863790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.348 [2024-12-16 02:57:48.864386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.348 [2024-12-16 02:57:48.864982] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.348 [2024-12-16 02:57:48.865010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.348 [2024-12-16 02:57:48.865031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.348 [2024-12-16 02:57:48.865051] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.348 [2024-12-16 02:57:48.876115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.348 [2024-12-16 02:57:48.876418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.348 [2024-12-16 02:57:48.876435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.348 [2024-12-16 02:57:48.876442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.348 [2024-12-16 02:57:48.876601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.348 [2024-12-16 02:57:48.876760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.348 [2024-12-16 02:57:48.876769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.348 [2024-12-16 02:57:48.876776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.348 [2024-12-16 02:57:48.876782] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.348 [2024-12-16 02:57:48.888951] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.348 [2024-12-16 02:57:48.889300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.348 [2024-12-16 02:57:48.889316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.348 [2024-12-16 02:57:48.889326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.348 [2024-12-16 02:57:48.889486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.348 [2024-12-16 02:57:48.889645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.348 [2024-12-16 02:57:48.889653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.348 [2024-12-16 02:57:48.889659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.348 [2024-12-16 02:57:48.889666] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.348 [2024-12-16 02:57:48.901761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.348 [2024-12-16 02:57:48.902171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.348 [2024-12-16 02:57:48.902188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.348 [2024-12-16 02:57:48.902195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.348 [2024-12-16 02:57:48.902363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.348 [2024-12-16 02:57:48.902530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.348 [2024-12-16 02:57:48.902539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.348 [2024-12-16 02:57:48.902545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.348 [2024-12-16 02:57:48.902552] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.349 [2024-12-16 02:57:48.914887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.349 [2024-12-16 02:57:48.915352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.349 [2024-12-16 02:57:48.915397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.349 [2024-12-16 02:57:48.915421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.349 [2024-12-16 02:57:48.915920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.349 [2024-12-16 02:57:48.916090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.349 [2024-12-16 02:57:48.916100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.349 [2024-12-16 02:57:48.916107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.349 [2024-12-16 02:57:48.916113] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.349 [2024-12-16 02:57:48.927805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.349 [2024-12-16 02:57:48.928162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.349 [2024-12-16 02:57:48.928181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.349 [2024-12-16 02:57:48.928190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.349 [2024-12-16 02:57:48.928358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.349 [2024-12-16 02:57:48.928530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.349 [2024-12-16 02:57:48.928540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.349 [2024-12-16 02:57:48.928546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.349 [2024-12-16 02:57:48.928553] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.349 [2024-12-16 02:57:48.940552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.349 [2024-12-16 02:57:48.940949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.349 [2024-12-16 02:57:48.940995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.349 [2024-12-16 02:57:48.941019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.349 [2024-12-16 02:57:48.941237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.349 [2024-12-16 02:57:48.941399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.349 [2024-12-16 02:57:48.941410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.349 [2024-12-16 02:57:48.941415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.349 [2024-12-16 02:57:48.941422] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.349 [2024-12-16 02:57:48.953541] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.349 [2024-12-16 02:57:48.953943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.349 [2024-12-16 02:57:48.953962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.349 [2024-12-16 02:57:48.953970] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.349 [2024-12-16 02:57:48.954142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.349 [2024-12-16 02:57:48.954319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.349 [2024-12-16 02:57:48.954331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.349 [2024-12-16 02:57:48.954338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.349 [2024-12-16 02:57:48.954345] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.349 [2024-12-16 02:57:48.966535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.349 [2024-12-16 02:57:48.966826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.349 [2024-12-16 02:57:48.966885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.349 [2024-12-16 02:57:48.966910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.349 [2024-12-16 02:57:48.967406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.349 [2024-12-16 02:57:48.967576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.349 [2024-12-16 02:57:48.967586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.349 [2024-12-16 02:57:48.967596] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.349 [2024-12-16 02:57:48.967604] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.349 [2024-12-16 02:57:48.979311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.349 [2024-12-16 02:57:48.979674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.349 [2024-12-16 02:57:48.979692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.349 [2024-12-16 02:57:48.979699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.349 [2024-12-16 02:57:48.979863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.349 [2024-12-16 02:57:48.980023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.349 [2024-12-16 02:57:48.980033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.349 [2024-12-16 02:57:48.980039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.349 [2024-12-16 02:57:48.980046] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.349 [2024-12-16 02:57:48.992076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.349 [2024-12-16 02:57:48.992403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.349 [2024-12-16 02:57:48.992420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.349 [2024-12-16 02:57:48.992427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.349 [2024-12-16 02:57:48.992585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.349 [2024-12-16 02:57:48.992745] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.349 [2024-12-16 02:57:48.992754] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.349 [2024-12-16 02:57:48.992760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.349 [2024-12-16 02:57:48.992767] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.349 [2024-12-16 02:57:49.005007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.610 [2024-12-16 02:57:49.005383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.610 [2024-12-16 02:57:49.005400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.610 [2024-12-16 02:57:49.005411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.610 [2024-12-16 02:57:49.005579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.610 [2024-12-16 02:57:49.005749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.610 [2024-12-16 02:57:49.005758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.610 [2024-12-16 02:57:49.005765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.610 [2024-12-16 02:57:49.005772] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.610 [2024-12-16 02:57:49.017991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.610 [2024-12-16 02:57:49.018382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.610 [2024-12-16 02:57:49.018399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.610 [2024-12-16 02:57:49.018407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.610 [2024-12-16 02:57:49.018579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.610 [2024-12-16 02:57:49.018752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.610 [2024-12-16 02:57:49.018761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.610 [2024-12-16 02:57:49.018768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.610 [2024-12-16 02:57:49.018775] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.610 [2024-12-16 02:57:49.030965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.610 [2024-12-16 02:57:49.031385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.610 [2024-12-16 02:57:49.031403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.610 [2024-12-16 02:57:49.031411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.610 [2024-12-16 02:57:49.031584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.610 [2024-12-16 02:57:49.031757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.610 [2024-12-16 02:57:49.031767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.610 [2024-12-16 02:57:49.031774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.610 [2024-12-16 02:57:49.031780] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.610 5864.40 IOPS, 22.91 MiB/s [2024-12-16T01:57:49.269Z] [2024-12-16 02:57:49.045259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.610 [2024-12-16 02:57:49.045578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.610 [2024-12-16 02:57:49.045596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.610 [2024-12-16 02:57:49.045604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.610 [2024-12-16 02:57:49.045777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.610 [2024-12-16 02:57:49.045958] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.610 [2024-12-16 02:57:49.045969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.610 [2024-12-16 02:57:49.045975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.610 [2024-12-16 02:57:49.045982] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.610 [2024-12-16 02:57:49.058274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.610 [2024-12-16 02:57:49.058598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.610 [2024-12-16 02:57:49.058616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.610 [2024-12-16 02:57:49.058627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.610 [2024-12-16 02:57:49.058801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.610 [2024-12-16 02:57:49.058981] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.610 [2024-12-16 02:57:49.058991] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.610 [2024-12-16 02:57:49.058998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.610 [2024-12-16 02:57:49.059005] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.610 [2024-12-16 02:57:49.071377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.610 [2024-12-16 02:57:49.071784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.610 [2024-12-16 02:57:49.071802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.610 [2024-12-16 02:57:49.071810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.610 [2024-12-16 02:57:49.071989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.610 [2024-12-16 02:57:49.072163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.610 [2024-12-16 02:57:49.072173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.610 [2024-12-16 02:57:49.072180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.610 [2024-12-16 02:57:49.072187] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.610 [2024-12-16 02:57:49.084470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.610 [2024-12-16 02:57:49.084907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.610 [2024-12-16 02:57:49.084926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.610 [2024-12-16 02:57:49.084934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.610 [2024-12-16 02:57:49.085123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.610 [2024-12-16 02:57:49.085298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.610 [2024-12-16 02:57:49.085308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.610 [2024-12-16 02:57:49.085314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.610 [2024-12-16 02:57:49.085321] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.610 [2024-12-16 02:57:49.097748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.610 [2024-12-16 02:57:49.098187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.610 [2024-12-16 02:57:49.098206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.610 [2024-12-16 02:57:49.098214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.610 [2024-12-16 02:57:49.098398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.610 [2024-12-16 02:57:49.098586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.611 [2024-12-16 02:57:49.098596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.611 [2024-12-16 02:57:49.098603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.611 [2024-12-16 02:57:49.098610] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.611 [2024-12-16 02:57:49.111141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.611 [2024-12-16 02:57:49.111592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.611 [2024-12-16 02:57:49.111611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.611 [2024-12-16 02:57:49.111620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.611 [2024-12-16 02:57:49.111816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.611 [2024-12-16 02:57:49.112020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.611 [2024-12-16 02:57:49.112031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.611 [2024-12-16 02:57:49.112038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.611 [2024-12-16 02:57:49.112046] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.611 [2024-12-16 02:57:49.124388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.611 [2024-12-16 02:57:49.124778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.611 [2024-12-16 02:57:49.124797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.611 [2024-12-16 02:57:49.124806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.611 [2024-12-16 02:57:49.125009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.611 [2024-12-16 02:57:49.125206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.611 [2024-12-16 02:57:49.125217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.611 [2024-12-16 02:57:49.125225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.611 [2024-12-16 02:57:49.125232] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.611 [2024-12-16 02:57:49.137677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.611 [2024-12-16 02:57:49.138015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.611 [2024-12-16 02:57:49.138034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.611 [2024-12-16 02:57:49.138043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.611 [2024-12-16 02:57:49.138226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.611 [2024-12-16 02:57:49.138410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.611 [2024-12-16 02:57:49.138420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.611 [2024-12-16 02:57:49.138431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.611 [2024-12-16 02:57:49.138439] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.611 [2024-12-16 02:57:49.150735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.611 [2024-12-16 02:57:49.151151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.611 [2024-12-16 02:57:49.151170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.611 [2024-12-16 02:57:49.151178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.611 [2024-12-16 02:57:49.151351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.611 [2024-12-16 02:57:49.151524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.611 [2024-12-16 02:57:49.151533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.611 [2024-12-16 02:57:49.151540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.611 [2024-12-16 02:57:49.151547] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.611 [2024-12-16 02:57:49.163725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.611 [2024-12-16 02:57:49.164146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.611 [2024-12-16 02:57:49.164165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.611 [2024-12-16 02:57:49.164172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.611 [2024-12-16 02:57:49.164345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.611 [2024-12-16 02:57:49.164518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.611 [2024-12-16 02:57:49.164527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.611 [2024-12-16 02:57:49.164534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.611 [2024-12-16 02:57:49.164542] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.611 [2024-12-16 02:57:49.176903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.611 [2024-12-16 02:57:49.177304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.611 [2024-12-16 02:57:49.177339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.611 [2024-12-16 02:57:49.177348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.611 [2024-12-16 02:57:49.177531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.611 [2024-12-16 02:57:49.177716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.611 [2024-12-16 02:57:49.177726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.611 [2024-12-16 02:57:49.177733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.611 [2024-12-16 02:57:49.177740] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.611 [2024-12-16 02:57:49.189938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.611 [2024-12-16 02:57:49.190369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.611 [2024-12-16 02:57:49.190387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.611 [2024-12-16 02:57:49.190395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.611 [2024-12-16 02:57:49.190568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.611 [2024-12-16 02:57:49.190743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.611 [2024-12-16 02:57:49.190753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.611 [2024-12-16 02:57:49.190760] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.611 [2024-12-16 02:57:49.190768] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.611 [2024-12-16 02:57:49.203324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.611 [2024-12-16 02:57:49.203740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.611 [2024-12-16 02:57:49.203759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.611 [2024-12-16 02:57:49.203766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.611 [2024-12-16 02:57:49.203956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.611 [2024-12-16 02:57:49.204141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.611 [2024-12-16 02:57:49.204152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.611 [2024-12-16 02:57:49.204159] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.611 [2024-12-16 02:57:49.204166] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.611 [2024-12-16 02:57:49.216321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.611 [2024-12-16 02:57:49.216725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.611 [2024-12-16 02:57:49.216743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.611 [2024-12-16 02:57:49.216751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.611 [2024-12-16 02:57:49.216930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.611 [2024-12-16 02:57:49.217104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.611 [2024-12-16 02:57:49.217114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.611 [2024-12-16 02:57:49.217120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.611 [2024-12-16 02:57:49.217127] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.611 [2024-12-16 02:57:49.229456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.611 [2024-12-16 02:57:49.229896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.611 [2024-12-16 02:57:49.229915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.612 [2024-12-16 02:57:49.229926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.612 [2024-12-16 02:57:49.230117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.612 [2024-12-16 02:57:49.230292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.612 [2024-12-16 02:57:49.230301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.612 [2024-12-16 02:57:49.230308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.612 [2024-12-16 02:57:49.230315] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.612 [2024-12-16 02:57:49.242717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.612 [2024-12-16 02:57:49.243183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.612 [2024-12-16 02:57:49.243202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.612 [2024-12-16 02:57:49.243210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.612 [2024-12-16 02:57:49.243394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.612 [2024-12-16 02:57:49.243578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.612 [2024-12-16 02:57:49.243588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.612 [2024-12-16 02:57:49.243595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.612 [2024-12-16 02:57:49.243602] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.612 [2024-12-16 02:57:49.255814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.612 [2024-12-16 02:57:49.256198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.612 [2024-12-16 02:57:49.256216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.612 [2024-12-16 02:57:49.256224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.612 [2024-12-16 02:57:49.256396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.612 [2024-12-16 02:57:49.256569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.612 [2024-12-16 02:57:49.256578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.612 [2024-12-16 02:57:49.256585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.612 [2024-12-16 02:57:49.256592] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.872 [2024-12-16 02:57:49.268830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.872 [2024-12-16 02:57:49.269216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.872 [2024-12-16 02:57:49.269234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.872 [2024-12-16 02:57:49.269242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.872 [2024-12-16 02:57:49.269415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.872 [2024-12-16 02:57:49.269592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.872 [2024-12-16 02:57:49.269602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.872 [2024-12-16 02:57:49.269608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.872 [2024-12-16 02:57:49.269615] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.872 [2024-12-16 02:57:49.282042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.872 [2024-12-16 02:57:49.282455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.872 [2024-12-16 02:57:49.282474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.872 [2024-12-16 02:57:49.282482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.872 [2024-12-16 02:57:49.282665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.872 [2024-12-16 02:57:49.282855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.872 [2024-12-16 02:57:49.282866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.872 [2024-12-16 02:57:49.282874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.872 [2024-12-16 02:57:49.282882] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.872 [2024-12-16 02:57:49.295350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.872 [2024-12-16 02:57:49.295786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.872 [2024-12-16 02:57:49.295806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.872 [2024-12-16 02:57:49.295814] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.872 [2024-12-16 02:57:49.296003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.872 [2024-12-16 02:57:49.296186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.872 [2024-12-16 02:57:49.296196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.872 [2024-12-16 02:57:49.296203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.872 [2024-12-16 02:57:49.296210] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.872 [2024-12-16 02:57:49.308660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.872 [2024-12-16 02:57:49.309111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.872 [2024-12-16 02:57:49.309130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.872 [2024-12-16 02:57:49.309138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.872 [2024-12-16 02:57:49.309322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.872 [2024-12-16 02:57:49.309507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.872 [2024-12-16 02:57:49.309517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.872 [2024-12-16 02:57:49.309527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.872 [2024-12-16 02:57:49.309534] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.872 [2024-12-16 02:57:49.321739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.872 [2024-12-16 02:57:49.322167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.872 [2024-12-16 02:57:49.322186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.872 [2024-12-16 02:57:49.322193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.872 [2024-12-16 02:57:49.322366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.872 [2024-12-16 02:57:49.322541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.872 [2024-12-16 02:57:49.322550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.872 [2024-12-16 02:57:49.322557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.872 [2024-12-16 02:57:49.322564] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.872 [2024-12-16 02:57:49.334754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.872 [2024-12-16 02:57:49.335178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.873 [2024-12-16 02:57:49.335197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.873 [2024-12-16 02:57:49.335205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.873 [2024-12-16 02:57:49.335377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.873 [2024-12-16 02:57:49.335550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.873 [2024-12-16 02:57:49.335560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.873 [2024-12-16 02:57:49.335566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.873 [2024-12-16 02:57:49.335573] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.873 [2024-12-16 02:57:49.347764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.873 [2024-12-16 02:57:49.348091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.873 [2024-12-16 02:57:49.348109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.873 [2024-12-16 02:57:49.348117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.873 [2024-12-16 02:57:49.348289] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.873 [2024-12-16 02:57:49.348463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.873 [2024-12-16 02:57:49.348472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.873 [2024-12-16 02:57:49.348478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.873 [2024-12-16 02:57:49.348486] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.873 [2024-12-16 02:57:49.360747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.873 [2024-12-16 02:57:49.361145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.873 [2024-12-16 02:57:49.361163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.873 [2024-12-16 02:57:49.361171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.873 [2024-12-16 02:57:49.361338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.873 [2024-12-16 02:57:49.361506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.873 [2024-12-16 02:57:49.361516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.873 [2024-12-16 02:57:49.361522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.873 [2024-12-16 02:57:49.361528] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.873 [2024-12-16 02:57:49.373596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.873 [2024-12-16 02:57:49.374023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.873 [2024-12-16 02:57:49.374069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.873 [2024-12-16 02:57:49.374093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.873 [2024-12-16 02:57:49.374674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.873 [2024-12-16 02:57:49.375320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.873 [2024-12-16 02:57:49.375330] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.873 [2024-12-16 02:57:49.375337] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.873 [2024-12-16 02:57:49.375343] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.873 [2024-12-16 02:57:49.386374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.873 [2024-12-16 02:57:49.386711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.873 [2024-12-16 02:57:49.386728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.873 [2024-12-16 02:57:49.386736] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.873 [2024-12-16 02:57:49.386900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.873 [2024-12-16 02:57:49.387059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.873 [2024-12-16 02:57:49.387068] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.873 [2024-12-16 02:57:49.387074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.873 [2024-12-16 02:57:49.387081] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.873 [2024-12-16 02:57:49.399162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.873 [2024-12-16 02:57:49.399592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.873 [2024-12-16 02:57:49.399637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.873 [2024-12-16 02:57:49.399669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.873 [2024-12-16 02:57:49.400115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.873 [2024-12-16 02:57:49.400277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.873 [2024-12-16 02:57:49.400286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.873 [2024-12-16 02:57:49.400292] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.873 [2024-12-16 02:57:49.400299] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.873 [2024-12-16 02:57:49.411929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.873 [2024-12-16 02:57:49.412319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.873 [2024-12-16 02:57:49.412336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.873 [2024-12-16 02:57:49.412343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.873 [2024-12-16 02:57:49.412502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.873 [2024-12-16 02:57:49.412661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.873 [2024-12-16 02:57:49.412670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.873 [2024-12-16 02:57:49.412676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.873 [2024-12-16 02:57:49.412682] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.873 [2024-12-16 02:57:49.424715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.873 [2024-12-16 02:57:49.425153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.873 [2024-12-16 02:57:49.425198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.873 [2024-12-16 02:57:49.425221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.873 [2024-12-16 02:57:49.425804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.873 [2024-12-16 02:57:49.426405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.873 [2024-12-16 02:57:49.426444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.873 [2024-12-16 02:57:49.426451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.873 [2024-12-16 02:57:49.426459] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.873 [2024-12-16 02:57:49.440046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.873 [2024-12-16 02:57:49.440497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.873 [2024-12-16 02:57:49.440542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.873 [2024-12-16 02:57:49.440565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.873 [2024-12-16 02:57:49.441159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.873 [2024-12-16 02:57:49.441448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.873 [2024-12-16 02:57:49.441461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.873 [2024-12-16 02:57:49.441471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.873 [2024-12-16 02:57:49.441480] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.873 [2024-12-16 02:57:49.452991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.873 [2024-12-16 02:57:49.453409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.873 [2024-12-16 02:57:49.453427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.873 [2024-12-16 02:57:49.453435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.873 [2024-12-16 02:57:49.453603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.873 [2024-12-16 02:57:49.453771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.873 [2024-12-16 02:57:49.453781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.873 [2024-12-16 02:57:49.453787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.873 [2024-12-16 02:57:49.453793] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.873 [2024-12-16 02:57:49.465762] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.873 [2024-12-16 02:57:49.466182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.873 [2024-12-16 02:57:49.466227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.873 [2024-12-16 02:57:49.466250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.874 [2024-12-16 02:57:49.466704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.874 [2024-12-16 02:57:49.466870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.874 [2024-12-16 02:57:49.466880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.874 [2024-12-16 02:57:49.466887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.874 [2024-12-16 02:57:49.466893] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.874 [2024-12-16 02:57:49.478616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.874 [2024-12-16 02:57:49.478959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.874 [2024-12-16 02:57:49.479008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.874 [2024-12-16 02:57:49.479032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.874 [2024-12-16 02:57:49.479564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.874 [2024-12-16 02:57:49.479724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.874 [2024-12-16 02:57:49.479733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.874 [2024-12-16 02:57:49.479743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.874 [2024-12-16 02:57:49.479749] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.874 [2024-12-16 02:57:49.491385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.874 [2024-12-16 02:57:49.491785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.874 [2024-12-16 02:57:49.491802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.874 [2024-12-16 02:57:49.491809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.874 [2024-12-16 02:57:49.491973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.874 [2024-12-16 02:57:49.492134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.874 [2024-12-16 02:57:49.492144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.874 [2024-12-16 02:57:49.492150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.874 [2024-12-16 02:57:49.492156] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.874 [2024-12-16 02:57:49.504238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.874 [2024-12-16 02:57:49.504656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.874 [2024-12-16 02:57:49.504673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.874 [2024-12-16 02:57:49.504681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.874 [2024-12-16 02:57:49.504840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.874 [2024-12-16 02:57:49.505005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.874 [2024-12-16 02:57:49.505015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.874 [2024-12-16 02:57:49.505021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.874 [2024-12-16 02:57:49.505027] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:18.874 [2024-12-16 02:57:49.517244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:18.874 [2024-12-16 02:57:49.517660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:18.874 [2024-12-16 02:57:49.517679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:18.874 [2024-12-16 02:57:49.517686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:18.874 [2024-12-16 02:57:49.517852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:18.874 [2024-12-16 02:57:49.518013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:18.874 [2024-12-16 02:57:49.518023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:18.874 [2024-12-16 02:57:49.518029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:18.874 [2024-12-16 02:57:49.518036] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.134 [2024-12-16 02:57:49.530305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.134 [2024-12-16 02:57:49.530642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.134 [2024-12-16 02:57:49.530660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:19.134 [2024-12-16 02:57:49.530668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:19.134 [2024-12-16 02:57:49.530841] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:19.134 [2024-12-16 02:57:49.531059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.134 [2024-12-16 02:57:49.531069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.134 [2024-12-16 02:57:49.531076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.134 [2024-12-16 02:57:49.531082] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.134 [2024-12-16 02:57:49.543187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.134 [2024-12-16 02:57:49.543597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.134 [2024-12-16 02:57:49.543615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:19.134 [2024-12-16 02:57:49.543622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:19.134 [2024-12-16 02:57:49.543781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:19.134 [2024-12-16 02:57:49.543946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.134 [2024-12-16 02:57:49.543955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.134 [2024-12-16 02:57:49.543961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.134 [2024-12-16 02:57:49.543967] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.134 [2024-12-16 02:57:49.556007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.134 [2024-12-16 02:57:49.556396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.134 [2024-12-16 02:57:49.556431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:19.134 [2024-12-16 02:57:49.556457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:19.134 [2024-12-16 02:57:49.556992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:19.134 [2024-12-16 02:57:49.557154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.134 [2024-12-16 02:57:49.557163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.134 [2024-12-16 02:57:49.557169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.134 [2024-12-16 02:57:49.557176] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.134 [2024-12-16 02:57:49.568804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.135 [2024-12-16 02:57:49.569196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.135 [2024-12-16 02:57:49.569213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:19.135 [2024-12-16 02:57:49.569225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:19.135 [2024-12-16 02:57:49.569384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:19.135 [2024-12-16 02:57:49.569543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.135 [2024-12-16 02:57:49.569553] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.135 [2024-12-16 02:57:49.569559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.135 [2024-12-16 02:57:49.569565] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.135 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1203656 Killed "${NVMF_APP[@]}" "$@" 00:36:19.135 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:36:19.135 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:19.135 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:19.135 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:19.135 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:19.135 [2024-12-16 02:57:49.581862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.135 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1205011 00:36:19.135 [2024-12-16 02:57:49.582285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.135 [2024-12-16 02:57:49.582303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:19.135 [2024-12-16 02:57:49.582311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:19.135 [2024-12-16 02:57:49.582484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:19.135 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1205011 00:36:19.135 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:19.135 [2024-12-16 02:57:49.582658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.135 [2024-12-16 02:57:49.582667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.135 [2024-12-16 02:57:49.582674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.135 [2024-12-16 02:57:49.582681] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.135 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1205011 ']' 00:36:19.135 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:19.135 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:19.135 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:19.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:36:19.135 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:19.135 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:19.135 [2024-12-16 02:57:49.594865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.135 [2024-12-16 02:57:49.595226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.135 [2024-12-16 02:57:49.595248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:19.135 [2024-12-16 02:57:49.595256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:19.135 [2024-12-16 02:57:49.595429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:19.135 [2024-12-16 02:57:49.595601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.135 [2024-12-16 02:57:49.595611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.135 [2024-12-16 02:57:49.595618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.135 [2024-12-16 02:57:49.595624] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.135 [2024-12-16 02:57:49.607979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.135 [2024-12-16 02:57:49.608329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.135 [2024-12-16 02:57:49.608348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:19.135 [2024-12-16 02:57:49.608356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:19.135 [2024-12-16 02:57:49.608529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:19.135 [2024-12-16 02:57:49.608704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.135 [2024-12-16 02:57:49.608714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.135 [2024-12-16 02:57:49.608721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.135 [2024-12-16 02:57:49.608728] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.135 [2024-12-16 02:57:49.621079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.135 [2024-12-16 02:57:49.621427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.135 [2024-12-16 02:57:49.621445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:19.135 [2024-12-16 02:57:49.621454] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:19.135 [2024-12-16 02:57:49.621627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:19.135 [2024-12-16 02:57:49.621799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.135 [2024-12-16 02:57:49.621809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.135 [2024-12-16 02:57:49.621816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.135 [2024-12-16 02:57:49.621823] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.135 [2024-12-16 02:57:49.631964] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:36:19.135 [2024-12-16 02:57:49.632006] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:19.135 [2024-12-16 02:57:49.634086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.135 [2024-12-16 02:57:49.634425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.135 [2024-12-16 02:57:49.634443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:19.135 [2024-12-16 02:57:49.634451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:19.135 [2024-12-16 02:57:49.634620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:19.135 [2024-12-16 02:57:49.634790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.135 [2024-12-16 02:57:49.634800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.135 [2024-12-16 02:57:49.634808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.135 [2024-12-16 02:57:49.634816] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.135 [2024-12-16 02:57:49.647050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.135 [2024-12-16 02:57:49.647481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.135 [2024-12-16 02:57:49.647499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:19.135 [2024-12-16 02:57:49.647507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:19.135 [2024-12-16 02:57:49.647675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:19.135 [2024-12-16 02:57:49.647844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.135 [2024-12-16 02:57:49.647861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.135 [2024-12-16 02:57:49.647868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.135 [2024-12-16 02:57:49.647875] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.135 [2024-12-16 02:57:49.659939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.135 [2024-12-16 02:57:49.660270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.135 [2024-12-16 02:57:49.660289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:19.135 [2024-12-16 02:57:49.660297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:19.135 [2024-12-16 02:57:49.660465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:19.135 [2024-12-16 02:57:49.660634] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.135 [2024-12-16 02:57:49.660644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.135 [2024-12-16 02:57:49.660650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.135 [2024-12-16 02:57:49.660657] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.135 [2024-12-16 02:57:49.673014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.136 [2024-12-16 02:57:49.673440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.136 [2024-12-16 02:57:49.673457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:19.136 [2024-12-16 02:57:49.673465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:19.136 [2024-12-16 02:57:49.673642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:19.136 [2024-12-16 02:57:49.673816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.136 [2024-12-16 02:57:49.673826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.136 [2024-12-16 02:57:49.673833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.136 [2024-12-16 02:57:49.673839] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.136 [2024-12-16 02:57:49.686020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.136 [2024-12-16 02:57:49.686459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.136 [2024-12-16 02:57:49.686477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:19.136 [2024-12-16 02:57:49.686485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:19.136 [2024-12-16 02:57:49.686658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:19.136 [2024-12-16 02:57:49.686831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.136 [2024-12-16 02:57:49.686840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.136 [2024-12-16 02:57:49.686852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.136 [2024-12-16 02:57:49.686861] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.136 [2024-12-16 02:57:49.699100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.136 [2024-12-16 02:57:49.699501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.136 [2024-12-16 02:57:49.699519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:19.136 [2024-12-16 02:57:49.699527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:19.136 [2024-12-16 02:57:49.699701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:19.136 [2024-12-16 02:57:49.699880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.136 [2024-12-16 02:57:49.699890] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.136 [2024-12-16 02:57:49.699897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.136 [2024-12-16 02:57:49.699905] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.136 [2024-12-16 02:57:49.712105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.136 [2024-12-16 02:57:49.712439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.136 [2024-12-16 02:57:49.712457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:19.136 [2024-12-16 02:57:49.712465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:19.136 [2024-12-16 02:57:49.712638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:19.136 [2024-12-16 02:57:49.712813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.136 [2024-12-16 02:57:49.712826] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.136 [2024-12-16 02:57:49.712833] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.136 [2024-12-16 02:57:49.712841] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:19.136 [2024-12-16 02:57:49.713326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:19.136 [2024-12-16 02:57:49.725104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:19.136 [2024-12-16 02:57:49.725550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:19.136 [2024-12-16 02:57:49.725572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420 00:36:19.136 [2024-12-16 02:57:49.725580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set 00:36:19.136 [2024-12-16 02:57:49.725750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor 00:36:19.136 [2024-12-16 02:57:49.725926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:19.136 [2024-12-16 02:57:49.725937] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:19.136 [2024-12-16 02:57:49.725944] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:19.136 [2024-12-16 02:57:49.725951] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:19.136 [2024-12-16 02:57:49.735019] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:19.136 [2024-12-16 02:57:49.735046] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:36:19.136 [2024-12-16 02:57:49.735053] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:19.136 [2024-12-16 02:57:49.735059] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:19.136 [2024-12-16 02:57:49.735064] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:19.136 [2024-12-16 02:57:49.736311] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:36:19.136 [2024-12-16 02:57:49.736424] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:36:19.136 [2024-12-16 02:57:49.736425] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:36:19.136 [2024-12-16 02:57:49.738185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.136 [2024-12-16 02:57:49.738634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.136 [2024-12-16 02:57:49.738655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:19.136 [2024-12-16 02:57:49.738664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:19.136 [2024-12-16 02:57:49.738840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:19.136 [2024-12-16 02:57:49.739023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.136 [2024-12-16 02:57:49.739033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.136 [2024-12-16 02:57:49.739040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.136 [2024-12-16 02:57:49.739048] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.136 [2024-12-16 02:57:49.751243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.136 [2024-12-16 02:57:49.751705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.136 [2024-12-16 02:57:49.751727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:19.136 [2024-12-16 02:57:49.751737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:19.136 [2024-12-16 02:57:49.751919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:19.136 [2024-12-16 02:57:49.752094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.136 [2024-12-16 02:57:49.752104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.136 [2024-12-16 02:57:49.752111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.136 [2024-12-16 02:57:49.752119] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.136 [2024-12-16 02:57:49.764319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.136 [2024-12-16 02:57:49.764753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.136 [2024-12-16 02:57:49.764777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:19.136 [2024-12-16 02:57:49.764787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:19.136 [2024-12-16 02:57:49.764968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:19.136 [2024-12-16 02:57:49.765144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.136 [2024-12-16 02:57:49.765154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.136 [2024-12-16 02:57:49.765161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.136 [2024-12-16 02:57:49.765170] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.136 [2024-12-16 02:57:49.777369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.136 [2024-12-16 02:57:49.777681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.136 [2024-12-16 02:57:49.777704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:19.136 [2024-12-16 02:57:49.777713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:19.136 [2024-12-16 02:57:49.777893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:19.136 [2024-12-16 02:57:49.778069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.136 [2024-12-16 02:57:49.778079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.136 [2024-12-16 02:57:49.778087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.136 [2024-12-16 02:57:49.778094] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.136 [2024-12-16 02:57:49.790447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.136 [2024-12-16 02:57:49.790829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.136 [2024-12-16 02:57:49.790857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:19.136 [2024-12-16 02:57:49.790873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:19.137 [2024-12-16 02:57:49.791049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:19.137 [2024-12-16 02:57:49.791224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.137 [2024-12-16 02:57:49.791234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.137 [2024-12-16 02:57:49.791241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.137 [2024-12-16 02:57:49.791249] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.396 [2024-12-16 02:57:49.803443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.396 [2024-12-16 02:57:49.803784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.396 [2024-12-16 02:57:49.803803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:19.396 [2024-12-16 02:57:49.803812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:19.396 [2024-12-16 02:57:49.803991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:19.396 [2024-12-16 02:57:49.804166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.396 [2024-12-16 02:57:49.804176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.396 [2024-12-16 02:57:49.804184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.396 [2024-12-16 02:57:49.804191] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.396 [2024-12-16 02:57:49.816530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.396 [2024-12-16 02:57:49.816939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.396 [2024-12-16 02:57:49.816959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:19.396 [2024-12-16 02:57:49.816967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:19.396 [2024-12-16 02:57:49.817140] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:19.397 [2024-12-16 02:57:49.817314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.397 [2024-12-16 02:57:49.817324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.397 [2024-12-16 02:57:49.817330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.397 [2024-12-16 02:57:49.817338] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.397 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:19.397 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:36:19.397 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:36:19.397 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:19.397 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:19.397 [2024-12-16 02:57:49.829529] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.397 [2024-12-16 02:57:49.829959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.397 [2024-12-16 02:57:49.829983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:19.397 [2024-12-16 02:57:49.829991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:19.397 [2024-12-16 02:57:49.830164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:19.397 [2024-12-16 02:57:49.830340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.397 [2024-12-16 02:57:49.830351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.397 [2024-12-16 02:57:49.830358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.397 [2024-12-16 02:57:49.830364] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.397 [2024-12-16 02:57:49.842561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.397 [2024-12-16 02:57:49.842915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.397 [2024-12-16 02:57:49.842934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:19.397 [2024-12-16 02:57:49.842942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:19.397 [2024-12-16 02:57:49.843114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:19.397 [2024-12-16 02:57:49.843293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.397 [2024-12-16 02:57:49.843303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.397 [2024-12-16 02:57:49.843310] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.397 [2024-12-16 02:57:49.843317] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.397 [2024-12-16 02:57:49.855660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.397 [2024-12-16 02:57:49.855951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.397 [2024-12-16 02:57:49.855969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:19.397 [2024-12-16 02:57:49.855977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:19.397 [2024-12-16 02:57:49.856149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:19.397 [2024-12-16 02:57:49.856321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.397 [2024-12-16 02:57:49.856331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.397 [2024-12-16 02:57:49.856338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.397 [2024-12-16 02:57:49.856345] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.397 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:19.397 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:36:19.397 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:19.397 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:19.397 [2024-12-16 02:57:49.866859] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:19.397 [2024-12-16 02:57:49.868691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.397 [2024-12-16 02:57:49.869016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.397 [2024-12-16 02:57:49.869033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:19.397 [2024-12-16 02:57:49.869041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:19.397 [2024-12-16 02:57:49.869213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:19.397 [2024-12-16 02:57:49.869388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.397 [2024-12-16 02:57:49.869397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.397 [2024-12-16 02:57:49.869404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.397 [2024-12-16 02:57:49.869411] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.397 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:19.397 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:36:19.397 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:19.397 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:19.397 [2024-12-16 02:57:49.881760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.397 [2024-12-16 02:57:49.882147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.397 [2024-12-16 02:57:49.882165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:19.397 [2024-12-16 02:57:49.882173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:19.397 [2024-12-16 02:57:49.882347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:19.397 [2024-12-16 02:57:49.882520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.397 [2024-12-16 02:57:49.882529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.397 [2024-12-16 02:57:49.882536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.397 [2024-12-16 02:57:49.882543] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.397 [2024-12-16 02:57:49.894715] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.397 [2024-12-16 02:57:49.895118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.397 [2024-12-16 02:57:49.895136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:19.397 [2024-12-16 02:57:49.895144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:19.397 [2024-12-16 02:57:49.895316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:19.397 [2024-12-16 02:57:49.895489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.397 [2024-12-16 02:57:49.895499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.397 [2024-12-16 02:57:49.895505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.397 [2024-12-16 02:57:49.895512] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.397 Malloc0
00:36:19.397 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:19.397 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:36:19.397 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:19.397 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:19.397 [2024-12-16 02:57:49.907705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.397 [2024-12-16 02:57:49.908117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.397 [2024-12-16 02:57:49.908135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:19.397 [2024-12-16 02:57:49.908143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:19.397 [2024-12-16 02:57:49.908315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:19.397 [2024-12-16 02:57:49.908490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.397 [2024-12-16 02:57:49.908499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.397 [2024-12-16 02:57:49.908506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.397 [2024-12-16 02:57:49.908513] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.397 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:19.397 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:36:19.397 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:19.397 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:19.397 [2024-12-16 02:57:49.920677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.397 [2024-12-16 02:57:49.921035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:19.397 [2024-12-16 02:57:49.921053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2387490 with addr=10.0.0.2, port=4420
00:36:19.397 [2024-12-16 02:57:49.921062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2387490 is same with the state(6) to be set
00:36:19.397 [2024-12-16 02:57:49.921234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2387490 (9): Bad file descriptor
00:36:19.397 [2024-12-16 02:57:49.921408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:19.398 [2024-12-16 02:57:49.921418] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:19.398 [2024-12-16 02:57:49.921424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:19.398 [2024-12-16 02:57:49.921431] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:19.398 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:19.398 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:19.398 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:19.398 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:19.398 [2024-12-16 02:57:49.929553] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:19.398 [2024-12-16 02:57:49.933773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:19.398 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:19.398 02:57:49 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1204066
00:36:19.398 [2024-12-16 02:57:49.956286] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
00:36:20.775 5043.50 IOPS, 19.70 MiB/s [2024-12-16T01:57:52.371Z] 5987.14 IOPS, 23.39 MiB/s [2024-12-16T01:57:53.308Z] 6691.75 IOPS, 26.14 MiB/s [2024-12-16T01:57:54.245Z] 7226.11 IOPS, 28.23 MiB/s [2024-12-16T01:57:55.181Z] 7640.50 IOPS, 29.85 MiB/s [2024-12-16T01:57:56.117Z] 8009.00 IOPS, 31.29 MiB/s [2024-12-16T01:57:57.494Z] 8310.50 IOPS, 32.46 MiB/s [2024-12-16T01:57:58.431Z] 8573.77 IOPS, 33.49 MiB/s [2024-12-16T01:57:59.367Z] 8785.43 IOPS, 34.32 MiB/s
00:36:28.708 Latency(us)
00:36:28.708 [2024-12-16T01:57:59.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:28.708 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:36:28.708 Verification LBA range: start 0x0 length 0x4000
00:36:28.708 Nvme1n1 : 15.01 8968.35 35.03 10910.76 0.00 6419.16 651.46 19473.55
00:36:28.708 [2024-12-16T01:57:59.367Z] ===================================================================================================================
00:36:28.708 [2024-12-16T01:57:59.367Z] Total : 8968.35 35.03 10910.76 0.00 6419.16 651.46 19473.55
00:36:28.708 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup
02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync
00:36:28.708 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:36:28.708 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e
00:36:28.708 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:28.708 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:36:28.708 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:36:28.708 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e
00:36:28.708 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0
00:36:28.708 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1205011 ']'
00:36:28.708 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1205011
00:36:28.708 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1205011 ']'
00:36:28.708 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1205011
00:36:28.708 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname
00:36:28.708 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:28.708 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1205011
00:36:28.708 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:36:28.708 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:36:28.708 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1205011'
killing process with pid 1205011
00:36:28.968 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1205011
00:36:28.968 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1205011
00:36:28.968 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:36:28.968 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:36:28.968 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:36:28.968 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr
00:36:28.968 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save
00:36:28.968 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:36:28.968 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore
00:36:28.968 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:36:28.968 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns
00:36:28.968 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:28.968 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:36:28.968 02:57:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:36:31.504
00:36:31.504 real 0m26.030s
00:36:31.504 user 1m1.093s
00:36:31.504 sys 0m6.662s
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:31.504 ************************************
00:36:31.504 END TEST nvmf_bdevperf
************************************
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:36:31.504 ************************************
00:36:31.504 START TEST nvmf_target_disconnect
00:36:31.504 ************************************
02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:36:31.504 * Looking for test storage...
00:36:31.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-:
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-:
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<'
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 ))
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:36:31.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:36:31.504 --rc genhtml_branch_coverage=1
00:36:31.504 --rc genhtml_function_coverage=1
00:36:31.504 --rc genhtml_legend=1
00:36:31.504 --rc geninfo_all_blocks=1
00:36:31.504 --rc geninfo_unexecuted_blocks=1
00:36:31.504 00:36:31.504 ' 00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:31.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:31.504 --rc genhtml_branch_coverage=1 00:36:31.504 --rc genhtml_function_coverage=1 00:36:31.504 --rc genhtml_legend=1 00:36:31.504 --rc geninfo_all_blocks=1 00:36:31.504 --rc geninfo_unexecuted_blocks=1 00:36:31.504 00:36:31.504 ' 00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:31.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:31.504 --rc genhtml_branch_coverage=1 00:36:31.504 --rc genhtml_function_coverage=1 00:36:31.504 --rc genhtml_legend=1 00:36:31.504 --rc geninfo_all_blocks=1 00:36:31.504 --rc geninfo_unexecuted_blocks=1 00:36:31.504 00:36:31.504 ' 00:36:31.504 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:31.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:31.504 --rc genhtml_branch_coverage=1 00:36:31.504 --rc genhtml_function_coverage=1 00:36:31.504 --rc genhtml_legend=1 00:36:31.504 --rc geninfo_all_blocks=1 00:36:31.505 --rc geninfo_unexecuted_blocks=1 00:36:31.505 00:36:31.505 ' 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:31.505 02:58:01 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:31.505 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:36:31.505 02:58:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:36:38.076 
02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:38.076 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:38.076 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:38.076 Found net devices under 0000:af:00.0: cvl_0_0 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:38.076 Found net devices under 0000:af:00.1: cvl_0_1 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:38.076 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:38.077 02:58:07 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:38.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:38.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.334 ms 00:36:38.077 00:36:38.077 --- 10.0.0.2 ping statistics --- 00:36:38.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:38.077 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:38.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:38.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:36:38.077 00:36:38.077 --- 10.0.0.1 ping statistics --- 00:36:38.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:38.077 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:38.077 02:58:07 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:38.077 ************************************ 00:36:38.077 START TEST nvmf_target_disconnect_tc1 00:36:38.077 ************************************ 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:38.077 [2024-12-16 02:58:07.977002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:38.077 [2024-12-16 02:58:07.977046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a32c50 with 
addr=10.0.0.2, port=4420 00:36:38.077 [2024-12-16 02:58:07.977070] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:38.077 [2024-12-16 02:58:07.977082] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:38.077 [2024-12-16 02:58:07.977088] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:36:38.077 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:36:38.077 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:36:38.077 Initializing NVMe Controllers 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:38.077 00:36:38.077 real 0m0.120s 00:36:38.077 user 0m0.051s 00:36:38.077 sys 0m0.067s 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:38.077 02:58:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:38.077 ************************************ 00:36:38.077 END TEST nvmf_target_disconnect_tc1 00:36:38.077 ************************************ 00:36:38.077 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:36:38.077 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:38.077 02:58:08 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:38.077 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:38.077 ************************************ 00:36:38.077 START TEST nvmf_target_disconnect_tc2 00:36:38.077 ************************************ 00:36:38.077 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:36:38.077 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:36:38.077 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:38.077 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:38.077 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:38.077 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:38.077 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1210077 00:36:38.077 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1210077 00:36:38.077 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:38.077 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1210077 ']' 00:36:38.077 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:38.077 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:38.077 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:38.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:38.077 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:38.077 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:38.077 [2024-12-16 02:58:08.114699] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:36:38.077 [2024-12-16 02:58:08.114737] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:38.077 [2024-12-16 02:58:08.191774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:38.077 [2024-12-16 02:58:08.214117] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:38.077 [2024-12-16 02:58:08.214156] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:38.077 [2024-12-16 02:58:08.214163] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:38.077 [2024-12-16 02:58:08.214169] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:38.077 [2024-12-16 02:58:08.214174] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:38.077 [2024-12-16 02:58:08.215661] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:36:38.078 [2024-12-16 02:58:08.215774] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:36:38.078 [2024-12-16 02:58:08.215920] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:36:38.078 [2024-12-16 02:58:08.215922] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:36:38.078 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:38.078 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:36:38.078 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:38.078 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:38.078 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:38.078 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:38.078 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:38.078 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.078 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:38.078 Malloc0 00:36:38.078 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.078 02:58:08 
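The target is started with `-m 0xF0` and the log then shows reactors starting on cores 4 through 7: an SPDK/DPDK core mask is simply a bitmap of CPU indices. A small sketch (the function name is ours, not an SPDK API) decodes such a mask:

```python
def cores_from_mask(mask: int) -> list[int]:
    """Return the CPU indices selected by an SPDK/DPDK-style core mask."""
    return [bit for bit in range(mask.bit_length()) if mask >> bit & 1]

# -m 0xF0 selects cores 4-7, matching the four reactor threads in the log.
print(cores_from_mask(0xF0))  # [4, 5, 6, 7]
```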
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:38.078 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.078 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:38.078 [2024-12-16 02:58:08.379164] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:38.078 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.078 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:38.078 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.078 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:38.078 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.078 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:38.078 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.078 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:38.078 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.078 02:58:08 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:38.078 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.078 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:38.078 [2024-12-16 02:58:08.408219] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:38.078 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.078 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:38.078 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.078 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:38.078 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.078 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1210106 00:36:38.078 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:36:38.078 02:58:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:39.996 02:58:10 
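The `rpc_cmd` calls above are the standard SPDK RPC sequence for bringing up a TCP target before the `reconnect` example attaches to it. Outside the test harness, the same steps look roughly like this (arguments are taken from the log; `scripts/rpc.py` is SPDK's stock RPC client, and the listener address assumes the log's `cvl_0_0_ns_spdk` network namespace setup):

```shell
# Start the target with the same core mask as the log (-m 0xF0),
# then build the subsystem the reconnect example connects to.
./build/bin/nvmf_tgt -m 0xF0 &

./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_transport -t tcp -o
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
```

With the listener up (the `*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***` notice), tc2 then launches the `reconnect` workload and deliberately kills the target to exercise disconnect handling.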
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1210077 00:36:39.996 02:58:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:36:39.996 Read completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Read completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Read completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Read completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Read completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Read completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Read completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Read completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Read completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Write completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Read completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Read completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Write completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Write completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Read completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Read completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Read completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Read completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Read completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Write completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 
Write completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Write completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Write completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Write completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Write completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Read completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Read completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Read completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Write completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Read completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Write completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Read completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Read completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Read completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Read completed with error (sct=0, sc=8) 00:36:39.996 [2024-12-16 02:58:10.440054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:39.996 starting I/O failed 00:36:39.996 Write completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Read completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Write completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Read completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Read completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Write completed with error (sct=0, sc=8) 00:36:39.996 starting I/O 
failed 00:36:39.996 Write completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Read completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Read completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Write completed with error (sct=0, sc=8) 00:36:39.996 starting I/O failed 00:36:39.996 Read completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Read completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Write completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Write completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Write completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Write completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Write completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Read completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Write completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Write completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Write completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Read completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Read completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Write completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Read completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Read completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Write completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Write completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Write completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 
00:36:39.997 [2024-12-16 02:58:10.440262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:39.997 Read completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Read completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Read completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Read completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Read completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Read completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Read completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Read completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Write completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Read completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Write completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Read completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Read completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Read completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Write completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Write completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Read completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Write completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Read completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Read completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Write completed with error (sct=0, sc=8) 00:36:39.997 
starting I/O failed 00:36:39.997 Read completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Read completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Write completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Read completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Write completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Read completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Read completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Write completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Read completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Read completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 Write completed with error (sct=0, sc=8) 00:36:39.997 starting I/O failed 00:36:39.997 [2024-12-16 02:58:10.440454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:39.997 [2024-12-16 02:58:10.440718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.997 [2024-12-16 02:58:10.440741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.997 qpair failed and we were unable to recover it. 00:36:39.997 [2024-12-16 02:58:10.440981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.997 [2024-12-16 02:58:10.440993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.997 qpair failed and we were unable to recover it. 
00:36:39.997 [2024-12-16 02:58:10.441111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.997 [2024-12-16 02:58:10.441122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.997 qpair failed and we were unable to recover it. 00:36:39.997 [2024-12-16 02:58:10.441277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.997 [2024-12-16 02:58:10.441292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.997 qpair failed and we were unable to recover it. 00:36:39.997 [2024-12-16 02:58:10.441448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.997 [2024-12-16 02:58:10.441460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.997 qpair failed and we were unable to recover it. 00:36:39.997 [2024-12-16 02:58:10.441598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.997 [2024-12-16 02:58:10.441610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.997 qpair failed and we were unable to recover it. 00:36:39.997 [2024-12-16 02:58:10.441856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.997 [2024-12-16 02:58:10.441868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.997 qpair failed and we were unable to recover it. 
00:36:39.997 [2024-12-16 02:58:10.441957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.997 [2024-12-16 02:58:10.441967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.997 qpair failed and we were unable to recover it. 00:36:39.997 [2024-12-16 02:58:10.442112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.997 [2024-12-16 02:58:10.442124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.997 qpair failed and we were unable to recover it. 00:36:39.997 [2024-12-16 02:58:10.442303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.997 [2024-12-16 02:58:10.442336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.997 qpair failed and we were unable to recover it. 00:36:39.997 [2024-12-16 02:58:10.442452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.997 [2024-12-16 02:58:10.442485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.997 qpair failed and we were unable to recover it. 00:36:39.997 [2024-12-16 02:58:10.442621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.997 [2024-12-16 02:58:10.442654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.997 qpair failed and we were unable to recover it. 
00:36:39.997 [2024-12-16 02:58:10.442908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.997 [2024-12-16 02:58:10.442942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.997 qpair failed and we were unable to recover it. 00:36:39.997 [2024-12-16 02:58:10.443084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.997 [2024-12-16 02:58:10.443120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.997 qpair failed and we were unable to recover it. 00:36:39.997 [2024-12-16 02:58:10.443310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.997 [2024-12-16 02:58:10.443343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.997 qpair failed and we were unable to recover it. 00:36:39.997 [2024-12-16 02:58:10.443598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.997 [2024-12-16 02:58:10.443632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.997 qpair failed and we were unable to recover it. 00:36:39.997 [2024-12-16 02:58:10.443896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.997 [2024-12-16 02:58:10.443932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.997 qpair failed and we were unable to recover it. 
00:36:39.997 [2024-12-16 02:58:10.444135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.997 [2024-12-16 02:58:10.444172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.997 qpair failed and we were unable to recover it. 00:36:39.997 [2024-12-16 02:58:10.444310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.997 [2024-12-16 02:58:10.444343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.997 qpair failed and we were unable to recover it. 00:36:39.997 [2024-12-16 02:58:10.444555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.997 [2024-12-16 02:58:10.444593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.997 qpair failed and we were unable to recover it. 00:36:39.997 [2024-12-16 02:58:10.444734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.997 [2024-12-16 02:58:10.444747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.997 qpair failed and we were unable to recover it. 00:36:39.997 [2024-12-16 02:58:10.444916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.998 [2024-12-16 02:58:10.444951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.998 qpair failed and we were unable to recover it. 
00:36:39.998 [2024-12-16 02:58:10.446096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.998 [2024-12-16 02:58:10.446123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.998 qpair failed and we were unable to recover it. 00:36:39.998 [2024-12-16 02:58:10.446364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.998 [2024-12-16 02:58:10.446377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.998 qpair failed and we were unable to recover it. 00:36:39.998 [2024-12-16 02:58:10.446468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.998 [2024-12-16 02:58:10.446479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.998 qpair failed and we were unable to recover it. 00:36:39.998 [2024-12-16 02:58:10.446702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.998 [2024-12-16 02:58:10.446735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.998 qpair failed and we were unable to recover it. 00:36:39.998 [2024-12-16 02:58:10.446953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.998 [2024-12-16 02:58:10.446986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.998 qpair failed and we were unable to recover it. 
00:36:39.998 [2024-12-16 02:58:10.447133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.998 [2024-12-16 02:58:10.447166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.998 qpair failed and we were unable to recover it. 00:36:39.998 [2024-12-16 02:58:10.447308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.998 [2024-12-16 02:58:10.447340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.998 qpair failed and we were unable to recover it. 00:36:39.998 [2024-12-16 02:58:10.447614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.998 [2024-12-16 02:58:10.447646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.998 qpair failed and we were unable to recover it. 00:36:39.998 [2024-12-16 02:58:10.447843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.998 [2024-12-16 02:58:10.447887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.998 qpair failed and we were unable to recover it. 00:36:39.998 [2024-12-16 02:58:10.448059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.998 [2024-12-16 02:58:10.448093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.998 qpair failed and we were unable to recover it. 
00:36:39.998 [2024-12-16 02:58:10.448336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.998 [2024-12-16 02:58:10.448368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.998 qpair failed and we were unable to recover it. 00:36:39.998 [2024-12-16 02:58:10.448549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.998 [2024-12-16 02:58:10.448581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.998 qpair failed and we were unable to recover it. 00:36:39.998 [2024-12-16 02:58:10.448771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.998 [2024-12-16 02:58:10.448804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.998 qpair failed and we were unable to recover it. 00:36:39.998 [2024-12-16 02:58:10.449029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.998 [2024-12-16 02:58:10.449063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.998 qpair failed and we were unable to recover it. 00:36:39.998 [2024-12-16 02:58:10.449306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:39.998 [2024-12-16 02:58:10.449339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:39.998 qpair failed and we were unable to recover it. 
00:36:40.001 [2024-12-16 02:58:10.476698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.001 [2024-12-16 02:58:10.476733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.001 qpair failed and we were unable to recover it. 00:36:40.001 [2024-12-16 02:58:10.476941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.001 [2024-12-16 02:58:10.476975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.001 qpair failed and we were unable to recover it. 00:36:40.001 [2024-12-16 02:58:10.477175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.001 [2024-12-16 02:58:10.477208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.001 qpair failed and we were unable to recover it. 00:36:40.001 [2024-12-16 02:58:10.477358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.001 [2024-12-16 02:58:10.477392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.001 qpair failed and we were unable to recover it. 00:36:40.001 [2024-12-16 02:58:10.477695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.001 [2024-12-16 02:58:10.477727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.001 qpair failed and we were unable to recover it. 
00:36:40.001 [2024-12-16 02:58:10.477869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.001 [2024-12-16 02:58:10.477903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.001 qpair failed and we were unable to recover it. 00:36:40.001 [2024-12-16 02:58:10.478089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.001 [2024-12-16 02:58:10.478122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.001 qpair failed and we were unable to recover it. 00:36:40.001 [2024-12-16 02:58:10.478363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.001 [2024-12-16 02:58:10.478396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.001 qpair failed and we were unable to recover it. 00:36:40.001 [2024-12-16 02:58:10.478577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.001 [2024-12-16 02:58:10.478610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.001 qpair failed and we were unable to recover it. 00:36:40.001 [2024-12-16 02:58:10.478794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.001 [2024-12-16 02:58:10.478827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.001 qpair failed and we were unable to recover it. 
00:36:40.001 [2024-12-16 02:58:10.479016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.001 [2024-12-16 02:58:10.479051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.001 qpair failed and we were unable to recover it. 00:36:40.001 [2024-12-16 02:58:10.479243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.001 [2024-12-16 02:58:10.479277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.001 qpair failed and we were unable to recover it. 00:36:40.001 [2024-12-16 02:58:10.479526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.001 [2024-12-16 02:58:10.479561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.001 qpair failed and we were unable to recover it. 00:36:40.001 [2024-12-16 02:58:10.479741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.001 [2024-12-16 02:58:10.479774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.001 qpair failed and we were unable to recover it. 00:36:40.001 [2024-12-16 02:58:10.480016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.001 [2024-12-16 02:58:10.480051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.001 qpair failed and we were unable to recover it. 
00:36:40.001 [2024-12-16 02:58:10.480210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.001 [2024-12-16 02:58:10.480242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.001 qpair failed and we were unable to recover it. 00:36:40.001 [2024-12-16 02:58:10.480428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.001 [2024-12-16 02:58:10.480468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.001 qpair failed and we were unable to recover it. 00:36:40.001 [2024-12-16 02:58:10.480603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.001 [2024-12-16 02:58:10.480637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.001 qpair failed and we were unable to recover it. 00:36:40.001 [2024-12-16 02:58:10.480885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.001 [2024-12-16 02:58:10.480919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.001 qpair failed and we were unable to recover it. 00:36:40.001 [2024-12-16 02:58:10.481109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.001 [2024-12-16 02:58:10.481142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.001 qpair failed and we were unable to recover it. 
00:36:40.001 [2024-12-16 02:58:10.481383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.481416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.002 [2024-12-16 02:58:10.481671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.481704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.002 [2024-12-16 02:58:10.481944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.481979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.002 [2024-12-16 02:58:10.482241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.482273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.002 [2024-12-16 02:58:10.482537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.482570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 
00:36:40.002 [2024-12-16 02:58:10.482764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.482797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.002 [2024-12-16 02:58:10.482992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.483026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.002 [2024-12-16 02:58:10.483228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.483261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.002 [2024-12-16 02:58:10.483453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.483485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.002 [2024-12-16 02:58:10.483750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.483782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 
00:36:40.002 [2024-12-16 02:58:10.483937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.483972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.002 [2024-12-16 02:58:10.484237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.484270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.002 [2024-12-16 02:58:10.484464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.484499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.002 [2024-12-16 02:58:10.484692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.484725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.002 [2024-12-16 02:58:10.484988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.485022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 
00:36:40.002 [2024-12-16 02:58:10.485305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.485339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.002 [2024-12-16 02:58:10.485472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.485505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.002 [2024-12-16 02:58:10.485692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.485725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.002 [2024-12-16 02:58:10.485986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.486021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.002 [2024-12-16 02:58:10.486231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.486263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 
00:36:40.002 [2024-12-16 02:58:10.486509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.486541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.002 [2024-12-16 02:58:10.486723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.486755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.002 [2024-12-16 02:58:10.486937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.486971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.002 [2024-12-16 02:58:10.487127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.487160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.002 [2024-12-16 02:58:10.487360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.487394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 
00:36:40.002 [2024-12-16 02:58:10.487644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.487676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.002 [2024-12-16 02:58:10.487870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.487911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.002 [2024-12-16 02:58:10.488105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.488138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.002 [2024-12-16 02:58:10.488399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.488433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.002 [2024-12-16 02:58:10.488673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.488707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 
00:36:40.002 [2024-12-16 02:58:10.488976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.489011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.002 [2024-12-16 02:58:10.489197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.489230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.002 [2024-12-16 02:58:10.489433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.489469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.002 [2024-12-16 02:58:10.489652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.489685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.002 [2024-12-16 02:58:10.489969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.490004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 
00:36:40.002 [2024-12-16 02:58:10.490178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.490213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.002 [2024-12-16 02:58:10.490403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.490442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.002 [2024-12-16 02:58:10.490629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.490663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.002 [2024-12-16 02:58:10.490864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.002 [2024-12-16 02:58:10.490898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.002 qpair failed and we were unable to recover it. 00:36:40.003 [2024-12-16 02:58:10.491138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.491171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 
00:36:40.003 [2024-12-16 02:58:10.491312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.491345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 00:36:40.003 [2024-12-16 02:58:10.491528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.491561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 00:36:40.003 [2024-12-16 02:58:10.491760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.491793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 00:36:40.003 [2024-12-16 02:58:10.492061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.492095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 00:36:40.003 [2024-12-16 02:58:10.492336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.492368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 
00:36:40.003 [2024-12-16 02:58:10.492506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.492540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 00:36:40.003 [2024-12-16 02:58:10.492730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.492762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 00:36:40.003 [2024-12-16 02:58:10.492952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.492986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 00:36:40.003 [2024-12-16 02:58:10.493136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.493169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 00:36:40.003 [2024-12-16 02:58:10.493367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.493399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 
00:36:40.003 [2024-12-16 02:58:10.493523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.493557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 00:36:40.003 [2024-12-16 02:58:10.493836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.493880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 00:36:40.003 [2024-12-16 02:58:10.494066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.494099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 00:36:40.003 [2024-12-16 02:58:10.494292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.494324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 00:36:40.003 [2024-12-16 02:58:10.494543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.494576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 
00:36:40.003 [2024-12-16 02:58:10.494856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.494890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 00:36:40.003 [2024-12-16 02:58:10.495081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.495114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 00:36:40.003 [2024-12-16 02:58:10.495359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.495392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 00:36:40.003 [2024-12-16 02:58:10.495612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.495646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 00:36:40.003 [2024-12-16 02:58:10.495833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.495875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 
00:36:40.003 [2024-12-16 02:58:10.496007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.496041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 00:36:40.003 [2024-12-16 02:58:10.496290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.496323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 00:36:40.003 [2024-12-16 02:58:10.496508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.496541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 00:36:40.003 [2024-12-16 02:58:10.496730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.496764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 00:36:40.003 [2024-12-16 02:58:10.496998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.497033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 
00:36:40.003 [2024-12-16 02:58:10.497273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.497306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 00:36:40.003 [2024-12-16 02:58:10.497509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.497541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 00:36:40.003 [2024-12-16 02:58:10.497676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.497709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 00:36:40.003 [2024-12-16 02:58:10.497891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.497925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 00:36:40.003 [2024-12-16 02:58:10.498069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.498101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 
00:36:40.003 [2024-12-16 02:58:10.498409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.498442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 00:36:40.003 [2024-12-16 02:58:10.498618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.003 [2024-12-16 02:58:10.498650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.003 qpair failed and we were unable to recover it. 00:36:40.004 [2024-12-16 02:58:10.498840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.498881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 00:36:40.004 [2024-12-16 02:58:10.499030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.499062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 00:36:40.004 [2024-12-16 02:58:10.499192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.499225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 
00:36:40.004 [2024-12-16 02:58:10.499347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.499380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 00:36:40.004 [2024-12-16 02:58:10.499668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.499706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 00:36:40.004 [2024-12-16 02:58:10.499890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.499925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 00:36:40.004 [2024-12-16 02:58:10.500109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.500142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 00:36:40.004 [2024-12-16 02:58:10.500277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.500309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 
00:36:40.004 [2024-12-16 02:58:10.500436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.500469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 00:36:40.004 [2024-12-16 02:58:10.500733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.500766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 00:36:40.004 [2024-12-16 02:58:10.500969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.501003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 00:36:40.004 [2024-12-16 02:58:10.501153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.501186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 00:36:40.004 [2024-12-16 02:58:10.501385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.501418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 
00:36:40.004 [2024-12-16 02:58:10.501698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.501730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 00:36:40.004 [2024-12-16 02:58:10.501861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.501897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 00:36:40.004 [2024-12-16 02:58:10.502080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.502113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 00:36:40.004 [2024-12-16 02:58:10.502304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.502337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 00:36:40.004 [2024-12-16 02:58:10.502589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.502621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 
00:36:40.004 [2024-12-16 02:58:10.502821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.502862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 00:36:40.004 [2024-12-16 02:58:10.503057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.503090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 00:36:40.004 [2024-12-16 02:58:10.503335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.503367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 00:36:40.004 [2024-12-16 02:58:10.503503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.503535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 00:36:40.004 [2024-12-16 02:58:10.503719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.503752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 
00:36:40.004 [2024-12-16 02:58:10.503995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.504030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 00:36:40.004 [2024-12-16 02:58:10.504155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.504188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 00:36:40.004 [2024-12-16 02:58:10.504319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.504353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 00:36:40.004 [2024-12-16 02:58:10.504675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.504708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 00:36:40.004 [2024-12-16 02:58:10.504964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.504999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 
00:36:40.004 [2024-12-16 02:58:10.505147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.505179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 00:36:40.004 [2024-12-16 02:58:10.505371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.505403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 00:36:40.004 [2024-12-16 02:58:10.505531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.505565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 00:36:40.004 [2024-12-16 02:58:10.505755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.505788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 00:36:40.004 [2024-12-16 02:58:10.506026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.506061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 
00:36:40.004 [2024-12-16 02:58:10.506186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.506219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 00:36:40.004 [2024-12-16 02:58:10.508063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.508122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 00:36:40.004 [2024-12-16 02:58:10.508314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.508348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 00:36:40.004 [2024-12-16 02:58:10.508500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.508532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 00:36:40.004 [2024-12-16 02:58:10.508658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.508689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 
00:36:40.004 [2024-12-16 02:58:10.508889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.004 [2024-12-16 02:58:10.508925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.004 qpair failed and we were unable to recover it. 00:36:40.005 [2024-12-16 02:58:10.509122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.509154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 00:36:40.005 [2024-12-16 02:58:10.509349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.509382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 00:36:40.005 [2024-12-16 02:58:10.509693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.509725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 00:36:40.005 [2024-12-16 02:58:10.509964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.510003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 
00:36:40.005 [2024-12-16 02:58:10.510212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.510246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 00:36:40.005 [2024-12-16 02:58:10.510384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.510425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 00:36:40.005 [2024-12-16 02:58:10.510694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.510746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 00:36:40.005 [2024-12-16 02:58:10.510951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.510986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 00:36:40.005 [2024-12-16 02:58:10.511159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.511192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 
00:36:40.005 [2024-12-16 02:58:10.511303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.511335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 00:36:40.005 [2024-12-16 02:58:10.511461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.511493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 00:36:40.005 [2024-12-16 02:58:10.511734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.511768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 00:36:40.005 [2024-12-16 02:58:10.511914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.511949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 00:36:40.005 [2024-12-16 02:58:10.512080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.512113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 
00:36:40.005 [2024-12-16 02:58:10.512315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.512348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 00:36:40.005 [2024-12-16 02:58:10.512581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.512615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 00:36:40.005 [2024-12-16 02:58:10.512888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.512923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 00:36:40.005 [2024-12-16 02:58:10.513045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.513078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 00:36:40.005 [2024-12-16 02:58:10.513231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.513265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 
00:36:40.005 [2024-12-16 02:58:10.513587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.513621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 00:36:40.005 [2024-12-16 02:58:10.513815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.513858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 00:36:40.005 [2024-12-16 02:58:10.514001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.514034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 00:36:40.005 [2024-12-16 02:58:10.514209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.514242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 00:36:40.005 [2024-12-16 02:58:10.514376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.514408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 
00:36:40.005 [2024-12-16 02:58:10.514604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.514637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 00:36:40.005 [2024-12-16 02:58:10.514866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.514900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 00:36:40.005 [2024-12-16 02:58:10.515144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.515177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 00:36:40.005 [2024-12-16 02:58:10.515323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.515357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 00:36:40.005 [2024-12-16 02:58:10.515479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.515510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 
00:36:40.005 [2024-12-16 02:58:10.515718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.515752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 00:36:40.005 [2024-12-16 02:58:10.516002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.516037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 00:36:40.005 [2024-12-16 02:58:10.516233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.516266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 00:36:40.005 [2024-12-16 02:58:10.516404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.005 [2024-12-16 02:58:10.516438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.005 qpair failed and we were unable to recover it. 00:36:40.005 [2024-12-16 02:58:10.516583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.006 [2024-12-16 02:58:10.516616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.006 qpair failed and we were unable to recover it. 
00:36:40.006 [2024-12-16 02:58:10.516866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.006 [2024-12-16 02:58:10.516901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.006 qpair failed and we were unable to recover it. 00:36:40.006 [2024-12-16 02:58:10.517077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.006 [2024-12-16 02:58:10.517109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.006 qpair failed and we were unable to recover it. 00:36:40.006 [2024-12-16 02:58:10.517243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.006 [2024-12-16 02:58:10.517277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.006 qpair failed and we were unable to recover it. 00:36:40.006 [2024-12-16 02:58:10.517563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.006 [2024-12-16 02:58:10.517596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.006 qpair failed and we were unable to recover it. 00:36:40.006 [2024-12-16 02:58:10.517790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.006 [2024-12-16 02:58:10.517823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.006 qpair failed and we were unable to recover it. 
00:36:40.006 [2024-12-16 02:58:10.518025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.006 [2024-12-16 02:58:10.518060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.006 qpair failed and we were unable to recover it. 00:36:40.006 [2024-12-16 02:58:10.518244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.006 [2024-12-16 02:58:10.518277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.006 qpair failed and we were unable to recover it. 00:36:40.006 [2024-12-16 02:58:10.518459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.006 [2024-12-16 02:58:10.518500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.006 qpair failed and we were unable to recover it. 00:36:40.006 [2024-12-16 02:58:10.518710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.006 [2024-12-16 02:58:10.518743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.006 qpair failed and we were unable to recover it. 00:36:40.006 [2024-12-16 02:58:10.518927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.006 [2024-12-16 02:58:10.518961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.006 qpair failed and we were unable to recover it. 
00:36:40.007 [2024-12-16 02:58:10.531886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.007 [2024-12-16 02:58:10.531920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.007 qpair failed and we were unable to recover it. 00:36:40.007 [2024-12-16 02:58:10.532190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.007 [2024-12-16 02:58:10.532224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.007 qpair failed and we were unable to recover it. 00:36:40.008 [2024-12-16 02:58:10.532363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.532395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 00:36:40.008 [2024-12-16 02:58:10.532570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.532647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 00:36:40.008 [2024-12-16 02:58:10.532845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.532900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 
00:36:40.008 [2024-12-16 02:58:10.533081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.533115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 00:36:40.008 [2024-12-16 02:58:10.533246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.533280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 00:36:40.008 [2024-12-16 02:58:10.533417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.533450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 00:36:40.008 [2024-12-16 02:58:10.533636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.533669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 00:36:40.008 [2024-12-16 02:58:10.533878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.533915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 
00:36:40.008 [2024-12-16 02:58:10.534036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.534069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 00:36:40.008 [2024-12-16 02:58:10.534294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.534327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 00:36:40.008 [2024-12-16 02:58:10.534520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.534554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 00:36:40.008 [2024-12-16 02:58:10.534730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.534764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 00:36:40.008 [2024-12-16 02:58:10.534892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.534926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 
00:36:40.008 [2024-12-16 02:58:10.535105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.535138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 00:36:40.008 [2024-12-16 02:58:10.535312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.535346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 00:36:40.008 [2024-12-16 02:58:10.535482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.535516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 00:36:40.008 [2024-12-16 02:58:10.535708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.535741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 00:36:40.008 [2024-12-16 02:58:10.535858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.535894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 
00:36:40.008 [2024-12-16 02:58:10.536011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.536044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 00:36:40.008 [2024-12-16 02:58:10.536221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.536256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 00:36:40.008 [2024-12-16 02:58:10.536428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.536461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 00:36:40.008 [2024-12-16 02:58:10.536643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.536676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 00:36:40.008 [2024-12-16 02:58:10.536867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.536903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 
00:36:40.008 [2024-12-16 02:58:10.537025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.537059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 00:36:40.008 [2024-12-16 02:58:10.537172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.537206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 00:36:40.008 [2024-12-16 02:58:10.537325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.537358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 00:36:40.008 [2024-12-16 02:58:10.537498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.537531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 00:36:40.008 [2024-12-16 02:58:10.537641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.537674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 
00:36:40.008 [2024-12-16 02:58:10.537867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.537916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 00:36:40.008 [2024-12-16 02:58:10.538027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.538062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 00:36:40.008 [2024-12-16 02:58:10.538238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.538271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 00:36:40.008 [2024-12-16 02:58:10.538476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.538509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 00:36:40.008 [2024-12-16 02:58:10.538713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.008 [2024-12-16 02:58:10.538746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.008 qpair failed and we were unable to recover it. 
00:36:40.008 [2024-12-16 02:58:10.538872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.008 [2024-12-16 02:58:10.538907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.008 qpair failed and we were unable to recover it.
00:36:40.008 [2024-12-16 02:58:10.539141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.008 [2024-12-16 02:58:10.539175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.008 qpair failed and we were unable to recover it.
00:36:40.008 [2024-12-16 02:58:10.539291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.008 [2024-12-16 02:58:10.539323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.008 qpair failed and we were unable to recover it.
00:36:40.008 [2024-12-16 02:58:10.539435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.008 [2024-12-16 02:58:10.539469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.008 qpair failed and we were unable to recover it.
00:36:40.008 [2024-12-16 02:58:10.539593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.008 [2024-12-16 02:58:10.539627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.008 qpair failed and we were unable to recover it.
00:36:40.008 [2024-12-16 02:58:10.539741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.008 [2024-12-16 02:58:10.539774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.008 qpair failed and we were unable to recover it.
00:36:40.008 [2024-12-16 02:58:10.539897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.009 [2024-12-16 02:58:10.539932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.009 qpair failed and we were unable to recover it.
00:36:40.009 [2024-12-16 02:58:10.540066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.009 [2024-12-16 02:58:10.540100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.009 qpair failed and we were unable to recover it.
00:36:40.009 [2024-12-16 02:58:10.540368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.009 [2024-12-16 02:58:10.540401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.009 qpair failed and we were unable to recover it.
00:36:40.009 [2024-12-16 02:58:10.540533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.009 [2024-12-16 02:58:10.540568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.009 qpair failed and we were unable to recover it.
00:36:40.009 [2024-12-16 02:58:10.540705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.009 [2024-12-16 02:58:10.540739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.009 qpair failed and we were unable to recover it.
00:36:40.009 [2024-12-16 02:58:10.540987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.009 [2024-12-16 02:58:10.541022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.009 qpair failed and we were unable to recover it.
00:36:40.009 [2024-12-16 02:58:10.541157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.009 [2024-12-16 02:58:10.541190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.009 qpair failed and we were unable to recover it.
00:36:40.009 [2024-12-16 02:58:10.541370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.009 [2024-12-16 02:58:10.541404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.009 qpair failed and we were unable to recover it.
00:36:40.009 [2024-12-16 02:58:10.541509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.009 [2024-12-16 02:58:10.541542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.009 qpair failed and we were unable to recover it.
00:36:40.009 [2024-12-16 02:58:10.541681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.009 [2024-12-16 02:58:10.541714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.009 qpair failed and we were unable to recover it.
00:36:40.009 [2024-12-16 02:58:10.541902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.009 [2024-12-16 02:58:10.541936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.009 qpair failed and we were unable to recover it.
00:36:40.009 [2024-12-16 02:58:10.542051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.009 [2024-12-16 02:58:10.542084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.009 qpair failed and we were unable to recover it.
00:36:40.009 [2024-12-16 02:58:10.542275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.009 [2024-12-16 02:58:10.542308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.009 qpair failed and we were unable to recover it.
00:36:40.009 [2024-12-16 02:58:10.542496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.009 [2024-12-16 02:58:10.542530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.009 qpair failed and we were unable to recover it.
00:36:40.009 [2024-12-16 02:58:10.542719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.009 [2024-12-16 02:58:10.542752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.009 qpair failed and we were unable to recover it.
00:36:40.009 [2024-12-16 02:58:10.542887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.009 [2024-12-16 02:58:10.542923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.009 qpair failed and we were unable to recover it.
00:36:40.009 [2024-12-16 02:58:10.543100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.009 [2024-12-16 02:58:10.543134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.009 qpair failed and we were unable to recover it.
00:36:40.009 [2024-12-16 02:58:10.543253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.009 [2024-12-16 02:58:10.543287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.009 qpair failed and we were unable to recover it.
00:36:40.009 [2024-12-16 02:58:10.543537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.009 [2024-12-16 02:58:10.543571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.009 qpair failed and we were unable to recover it.
00:36:40.009 [2024-12-16 02:58:10.543700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.009 [2024-12-16 02:58:10.543733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.009 qpair failed and we were unable to recover it.
00:36:40.009 [2024-12-16 02:58:10.543924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.009 [2024-12-16 02:58:10.543959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.009 qpair failed and we were unable to recover it.
00:36:40.009 [2024-12-16 02:58:10.544138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.009 [2024-12-16 02:58:10.544170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.009 qpair failed and we were unable to recover it.
00:36:40.009 [2024-12-16 02:58:10.544284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.009 [2024-12-16 02:58:10.544316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.009 qpair failed and we were unable to recover it.
00:36:40.009 [2024-12-16 02:58:10.544429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.009 [2024-12-16 02:58:10.544463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.009 qpair failed and we were unable to recover it.
00:36:40.009 [2024-12-16 02:58:10.544653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.009 [2024-12-16 02:58:10.544686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.009 qpair failed and we were unable to recover it.
00:36:40.009 [2024-12-16 02:58:10.544811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.009 [2024-12-16 02:58:10.544844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.009 qpair failed and we were unable to recover it.
00:36:40.009 [2024-12-16 02:58:10.544981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.009 [2024-12-16 02:58:10.545014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.009 qpair failed and we were unable to recover it.
00:36:40.009 [2024-12-16 02:58:10.545137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.009 [2024-12-16 02:58:10.545170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.009 qpair failed and we were unable to recover it.
00:36:40.009 [2024-12-16 02:58:10.545278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.009 [2024-12-16 02:58:10.545311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.009 qpair failed and we were unable to recover it.
00:36:40.009 [2024-12-16 02:58:10.545443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.009 [2024-12-16 02:58:10.545475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.009 qpair failed and we were unable to recover it.
00:36:40.009 Read completed with error (sct=0, sc=8)
00:36:40.009 starting I/O failed
00:36:40.009 Read completed with error (sct=0, sc=8)
00:36:40.009 starting I/O failed
00:36:40.009 Read completed with error (sct=0, sc=8)
00:36:40.009 starting I/O failed
00:36:40.009 Read completed with error (sct=0, sc=8)
00:36:40.009 starting I/O failed
00:36:40.009 Write completed with error (sct=0, sc=8)
00:36:40.009 starting I/O failed
00:36:40.009 Read completed with error (sct=0, sc=8)
00:36:40.009 starting I/O failed
00:36:40.009 Read completed with error (sct=0, sc=8)
00:36:40.009 starting I/O failed
00:36:40.009 Write completed with error (sct=0, sc=8)
00:36:40.009 starting I/O failed
00:36:40.009 Read completed with error (sct=0, sc=8)
00:36:40.009 starting I/O failed
00:36:40.009 Read completed with error (sct=0, sc=8)
00:36:40.009 starting I/O failed
00:36:40.009 Read completed with error (sct=0, sc=8)
00:36:40.009 starting I/O failed
00:36:40.009 Read completed with error (sct=0, sc=8)
00:36:40.009 starting I/O failed
00:36:40.009 Read completed with error (sct=0, sc=8)
00:36:40.009 starting I/O failed
00:36:40.009 Write completed with error (sct=0, sc=8)
00:36:40.009 starting I/O failed
00:36:40.009 Write completed with error (sct=0, sc=8)
00:36:40.009 starting I/O failed
00:36:40.009 Write completed with error (sct=0, sc=8)
00:36:40.009 starting I/O failed
00:36:40.009 Write completed with error (sct=0, sc=8)
00:36:40.009 starting I/O failed
00:36:40.009 Write completed with error (sct=0, sc=8)
00:36:40.009 starting I/O failed
00:36:40.009 Read completed with error (sct=0, sc=8)
00:36:40.009 starting I/O failed
00:36:40.009 Read completed with error (sct=0, sc=8)
00:36:40.009 starting I/O failed
00:36:40.009 Read completed with error (sct=0, sc=8)
00:36:40.009 starting I/O failed
00:36:40.009 Write completed with error (sct=0, sc=8)
00:36:40.009 starting I/O failed
00:36:40.009 Read completed with error (sct=0, sc=8)
00:36:40.009 starting I/O failed
00:36:40.009 Write completed with error (sct=0, sc=8)
00:36:40.009 starting I/O failed
00:36:40.009 Write completed with error (sct=0, sc=8)
00:36:40.009 starting I/O failed
00:36:40.009 Write completed with error (sct=0, sc=8)
00:36:40.009 starting I/O failed
00:36:40.009 Write completed with error (sct=0, sc=8)
00:36:40.009 starting I/O failed
00:36:40.009 Write completed with error (sct=0, sc=8)
00:36:40.009 starting I/O failed
00:36:40.009 Read completed with error (sct=0, sc=8)
00:36:40.009 starting I/O failed
00:36:40.009 Write completed with error (sct=0, sc=8)
00:36:40.009 starting I/O failed
00:36:40.009 Write completed with error (sct=0, sc=8)
00:36:40.009 starting I/O failed
00:36:40.009 Read completed with error (sct=0, sc=8)
00:36:40.010 starting I/O failed
00:36:40.010 [2024-12-16 02:58:10.546128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:40.010 [2024-12-16 02:58:10.546223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.546262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.546463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.546497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.546768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.546802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.547000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.547034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.547144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.547177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.547349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.547383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.547558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.547590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.547765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.547798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.548000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.548034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.548304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.548338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.548530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.548563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.548745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.548777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.548920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.548956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.549147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.549180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.549313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.549346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.549467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.549500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.549771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.549804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.549935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.549969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.550093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.550126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.550248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.550281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.550479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.550512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.550707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.550740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.550868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.550902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.551039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.551072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.551287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.551321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.551440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.551473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.551590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.551623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.551811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.551844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.551985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.552018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.552154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.552187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.552358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.552445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.552594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.552631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.552741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.010 [2024-12-16 02:58:10.552775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.010 qpair failed and we were unable to recover it.
00:36:40.010 [2024-12-16 02:58:10.552908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.011 [2024-12-16 02:58:10.552971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.011 qpair failed and we were unable to recover it.
00:36:40.011 [2024-12-16 02:58:10.553099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.011 [2024-12-16 02:58:10.553132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.011 qpair failed and we were unable to recover it.
00:36:40.011 [2024-12-16 02:58:10.553316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.011 [2024-12-16 02:58:10.553350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.011 qpair failed and we were unable to recover it.
00:36:40.011 [2024-12-16 02:58:10.553463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.011 [2024-12-16 02:58:10.553497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.011 qpair failed and we were unable to recover it.
00:36:40.011 [2024-12-16 02:58:10.553614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.011 [2024-12-16 02:58:10.553647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.011 qpair failed and we were unable to recover it.
00:36:40.011 [2024-12-16 02:58:10.553844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.011 [2024-12-16 02:58:10.553889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.011 qpair failed and we were unable to recover it.
00:36:40.011 [2024-12-16 02:58:10.554023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.011 [2024-12-16 02:58:10.554057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.011 qpair failed and we were unable to recover it.
00:36:40.011 [2024-12-16 02:58:10.554324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.011 [2024-12-16 02:58:10.554357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.011 qpair failed and we were unable to recover it.
00:36:40.011 [2024-12-16 02:58:10.554542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.011 [2024-12-16 02:58:10.554575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.011 qpair failed and we were unable to recover it.
00:36:40.011 [2024-12-16 02:58:10.554751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.011 [2024-12-16 02:58:10.554785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.011 qpair failed and we were unable to recover it.
00:36:40.011 [2024-12-16 02:58:10.554985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.011 [2024-12-16 02:58:10.555019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.011 qpair failed and we were unable to recover it.
00:36:40.011 [2024-12-16 02:58:10.555144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.011 [2024-12-16 02:58:10.555178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.011 qpair failed and we were unable to recover it.
00:36:40.011 [2024-12-16 02:58:10.555309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.011 [2024-12-16 02:58:10.555342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.011 qpair failed and we were unable to recover it.
00:36:40.011 [2024-12-16 02:58:10.555458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.011 [2024-12-16 02:58:10.555491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.011 qpair failed and we were unable to recover it.
00:36:40.011 [2024-12-16 02:58:10.555636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.011 [2024-12-16 02:58:10.555669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.011 qpair failed and we were unable to recover it.
00:36:40.011 [2024-12-16 02:58:10.555800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.011 [2024-12-16 02:58:10.555834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.011 qpair failed and we were unable to recover it.
00:36:40.011 [2024-12-16 02:58:10.555954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.011 [2024-12-16 02:58:10.555988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.011 qpair failed and we were unable to recover it.
00:36:40.011 [2024-12-16 02:58:10.556193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.011 [2024-12-16 02:58:10.556227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.011 qpair failed and we were unable to recover it.
00:36:40.011 [2024-12-16 02:58:10.556410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.011 [2024-12-16 02:58:10.556444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.011 qpair failed and we were unable to recover it. 00:36:40.011 [2024-12-16 02:58:10.556568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.011 [2024-12-16 02:58:10.556602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.011 qpair failed and we were unable to recover it. 00:36:40.011 [2024-12-16 02:58:10.556799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.011 [2024-12-16 02:58:10.556833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.011 qpair failed and we were unable to recover it. 00:36:40.011 [2024-12-16 02:58:10.557034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.011 [2024-12-16 02:58:10.557068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.011 qpair failed and we were unable to recover it. 00:36:40.011 [2024-12-16 02:58:10.557256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.011 [2024-12-16 02:58:10.557289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.011 qpair failed and we were unable to recover it. 
00:36:40.011 [2024-12-16 02:58:10.557416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.011 [2024-12-16 02:58:10.557449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.011 qpair failed and we were unable to recover it. 00:36:40.011 [2024-12-16 02:58:10.557563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.011 [2024-12-16 02:58:10.557596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.011 qpair failed and we were unable to recover it. 00:36:40.011 [2024-12-16 02:58:10.557729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.011 [2024-12-16 02:58:10.557763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.011 qpair failed and we were unable to recover it. 00:36:40.011 [2024-12-16 02:58:10.557940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.011 [2024-12-16 02:58:10.557975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.011 qpair failed and we were unable to recover it. 00:36:40.011 [2024-12-16 02:58:10.558261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.011 [2024-12-16 02:58:10.558294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.011 qpair failed and we were unable to recover it. 
00:36:40.011 [2024-12-16 02:58:10.558403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.011 [2024-12-16 02:58:10.558437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.011 qpair failed and we were unable to recover it. 00:36:40.011 [2024-12-16 02:58:10.558558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.011 [2024-12-16 02:58:10.558591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.011 qpair failed and we were unable to recover it. 00:36:40.011 [2024-12-16 02:58:10.558704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.011 [2024-12-16 02:58:10.558738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.011 qpair failed and we were unable to recover it. 00:36:40.011 [2024-12-16 02:58:10.558858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.011 [2024-12-16 02:58:10.558892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.011 qpair failed and we were unable to recover it. 00:36:40.011 [2024-12-16 02:58:10.559022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.011 [2024-12-16 02:58:10.559056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.011 qpair failed and we were unable to recover it. 
00:36:40.011 [2024-12-16 02:58:10.559181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.011 [2024-12-16 02:58:10.559213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.011 qpair failed and we were unable to recover it. 00:36:40.011 [2024-12-16 02:58:10.559330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.011 [2024-12-16 02:58:10.559363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.011 qpair failed and we were unable to recover it. 00:36:40.011 [2024-12-16 02:58:10.559499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.011 [2024-12-16 02:58:10.559533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.011 qpair failed and we were unable to recover it. 00:36:40.011 [2024-12-16 02:58:10.559643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.011 [2024-12-16 02:58:10.559675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.011 qpair failed and we were unable to recover it. 00:36:40.011 [2024-12-16 02:58:10.559795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.011 [2024-12-16 02:58:10.559829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.011 qpair failed and we were unable to recover it. 
00:36:40.011 [2024-12-16 02:58:10.559975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.011 [2024-12-16 02:58:10.560010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.011 qpair failed and we were unable to recover it. 00:36:40.011 [2024-12-16 02:58:10.560122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.560155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 00:36:40.012 [2024-12-16 02:58:10.560274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.560313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 00:36:40.012 [2024-12-16 02:58:10.560494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.560527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 00:36:40.012 [2024-12-16 02:58:10.560650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.560683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 
00:36:40.012 [2024-12-16 02:58:10.560873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.560908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 00:36:40.012 [2024-12-16 02:58:10.561032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.561064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 00:36:40.012 [2024-12-16 02:58:10.561185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.561219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 00:36:40.012 [2024-12-16 02:58:10.561332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.561365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 00:36:40.012 [2024-12-16 02:58:10.561476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.561509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 
00:36:40.012 [2024-12-16 02:58:10.561717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.561750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 00:36:40.012 [2024-12-16 02:58:10.561922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.561957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 00:36:40.012 [2024-12-16 02:58:10.562083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.562116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 00:36:40.012 [2024-12-16 02:58:10.562306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.562339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 00:36:40.012 [2024-12-16 02:58:10.562461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.562510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 
00:36:40.012 [2024-12-16 02:58:10.562688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.562722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 00:36:40.012 [2024-12-16 02:58:10.562859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.562893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 00:36:40.012 [2024-12-16 02:58:10.563011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.563044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 00:36:40.012 [2024-12-16 02:58:10.563290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.563323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 00:36:40.012 [2024-12-16 02:58:10.563442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.563475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 
00:36:40.012 [2024-12-16 02:58:10.564934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.564988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 00:36:40.012 [2024-12-16 02:58:10.565202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.565234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 00:36:40.012 [2024-12-16 02:58:10.565479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.565514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 00:36:40.012 [2024-12-16 02:58:10.565639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.565671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 00:36:40.012 [2024-12-16 02:58:10.565788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.565821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 
00:36:40.012 [2024-12-16 02:58:10.565985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.566019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 00:36:40.012 [2024-12-16 02:58:10.566144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.566176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 00:36:40.012 [2024-12-16 02:58:10.566354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.566387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 00:36:40.012 [2024-12-16 02:58:10.566521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.566554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 00:36:40.012 [2024-12-16 02:58:10.566682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.566716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 
00:36:40.012 [2024-12-16 02:58:10.566837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.566881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 00:36:40.012 [2024-12-16 02:58:10.567007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.567039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 00:36:40.012 [2024-12-16 02:58:10.567166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.567199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 00:36:40.012 [2024-12-16 02:58:10.569020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.569075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 00:36:40.012 [2024-12-16 02:58:10.569212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.569245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 
00:36:40.012 [2024-12-16 02:58:10.569426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.569461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 00:36:40.012 [2024-12-16 02:58:10.569712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.012 [2024-12-16 02:58:10.569744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.012 qpair failed and we were unable to recover it. 00:36:40.012 [2024-12-16 02:58:10.569938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.013 [2024-12-16 02:58:10.569967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.013 qpair failed and we were unable to recover it. 00:36:40.013 [2024-12-16 02:58:10.570106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.013 [2024-12-16 02:58:10.570135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.013 qpair failed and we were unable to recover it. 00:36:40.013 [2024-12-16 02:58:10.570248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.013 [2024-12-16 02:58:10.570291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.013 qpair failed and we were unable to recover it. 
00:36:40.013 [2024-12-16 02:58:10.570413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.013 [2024-12-16 02:58:10.570446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.013 qpair failed and we were unable to recover it. 00:36:40.013 [2024-12-16 02:58:10.570648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.013 [2024-12-16 02:58:10.570681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.013 qpair failed and we were unable to recover it. 00:36:40.013 [2024-12-16 02:58:10.570785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.013 [2024-12-16 02:58:10.570826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.013 qpair failed and we were unable to recover it. 00:36:40.013 [2024-12-16 02:58:10.571014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.013 [2024-12-16 02:58:10.571050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.013 qpair failed and we were unable to recover it. 00:36:40.013 [2024-12-16 02:58:10.571160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.013 [2024-12-16 02:58:10.571189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.013 qpair failed and we were unable to recover it. 
00:36:40.013 [2024-12-16 02:58:10.571360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.013 [2024-12-16 02:58:10.571393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.013 qpair failed and we were unable to recover it. 00:36:40.013 [2024-12-16 02:58:10.571507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.013 [2024-12-16 02:58:10.571540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.013 qpair failed and we were unable to recover it. 00:36:40.013 [2024-12-16 02:58:10.571663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.013 [2024-12-16 02:58:10.571696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.013 qpair failed and we were unable to recover it. 00:36:40.013 [2024-12-16 02:58:10.571905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.013 [2024-12-16 02:58:10.571950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.013 qpair failed and we were unable to recover it. 00:36:40.013 [2024-12-16 02:58:10.572069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.013 [2024-12-16 02:58:10.572097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.013 qpair failed and we were unable to recover it. 
00:36:40.013 [2024-12-16 02:58:10.572290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.013 [2024-12-16 02:58:10.572317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.013 qpair failed and we were unable to recover it. 00:36:40.013 [2024-12-16 02:58:10.572474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.013 [2024-12-16 02:58:10.572502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.013 qpair failed and we were unable to recover it. 00:36:40.013 [2024-12-16 02:58:10.572667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.013 [2024-12-16 02:58:10.572700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.013 qpair failed and we were unable to recover it. 00:36:40.013 [2024-12-16 02:58:10.572821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.013 [2024-12-16 02:58:10.572863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.013 qpair failed and we were unable to recover it. 00:36:40.013 [2024-12-16 02:58:10.573066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.013 [2024-12-16 02:58:10.573098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.013 qpair failed and we were unable to recover it. 
00:36:40.013 [2024-12-16 02:58:10.573286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.013 [2024-12-16 02:58:10.573337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.013 qpair failed and we were unable to recover it. 00:36:40.013 [2024-12-16 02:58:10.573598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.013 [2024-12-16 02:58:10.573632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.013 qpair failed and we were unable to recover it. 00:36:40.013 [2024-12-16 02:58:10.573768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.013 [2024-12-16 02:58:10.573802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.013 qpair failed and we were unable to recover it. 00:36:40.013 [2024-12-16 02:58:10.573943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.013 [2024-12-16 02:58:10.573973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.013 qpair failed and we were unable to recover it. 00:36:40.013 [2024-12-16 02:58:10.574146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.013 [2024-12-16 02:58:10.574190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.013 qpair failed and we were unable to recover it. 
00:36:40.013 [2024-12-16 02:58:10.574367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.013 [2024-12-16 02:58:10.574401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.013 qpair failed and we were unable to recover it. 00:36:40.013 [2024-12-16 02:58:10.574573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.013 [2024-12-16 02:58:10.574606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.013 qpair failed and we were unable to recover it. 00:36:40.013 [2024-12-16 02:58:10.574791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.013 [2024-12-16 02:58:10.574819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.013 qpair failed and we were unable to recover it. 00:36:40.013 [2024-12-16 02:58:10.575006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.013 [2024-12-16 02:58:10.575035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.013 qpair failed and we were unable to recover it. 00:36:40.013 [2024-12-16 02:58:10.575150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.013 [2024-12-16 02:58:10.575178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.013 qpair failed and we were unable to recover it. 
00:36:40.013 [2024-12-16 02:58:10.575411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.013 [2024-12-16 02:58:10.575439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.013 qpair failed and we were unable to recover it.
[last three messages repeated 83 more times for tqpair=0x7f2744000b90, timestamps 02:58:10.575553 through 02:58:10.594328]
00:36:40.016 [2024-12-16 02:58:10.594546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.016 [2024-12-16 02:58:10.594632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:40.016 qpair failed and we were unable to recover it.
[last three messages repeated 30 more times for tqpair=0x7f273c000b90, timestamps 02:58:10.594782 through 02:58:10.602060]
00:36:40.016 [2024-12-16 02:58:10.602174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.016 [2024-12-16 02:58:10.602208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.016 qpair failed and we were unable to recover it. 00:36:40.016 [2024-12-16 02:58:10.602388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.016 [2024-12-16 02:58:10.602420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.016 qpair failed and we were unable to recover it. 00:36:40.016 [2024-12-16 02:58:10.602525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.016 [2024-12-16 02:58:10.602559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.016 qpair failed and we were unable to recover it. 00:36:40.016 [2024-12-16 02:58:10.602772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.016 [2024-12-16 02:58:10.602806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.016 qpair failed and we were unable to recover it. 00:36:40.016 [2024-12-16 02:58:10.602988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.016 [2024-12-16 02:58:10.603021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.016 qpair failed and we were unable to recover it. 
00:36:40.016 [2024-12-16 02:58:10.603208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.016 [2024-12-16 02:58:10.603241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.603363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.603397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.603518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.603551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.603737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.603770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.603964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.603999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 
00:36:40.017 [2024-12-16 02:58:10.604123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.604156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.604288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.604320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.604509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.604541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.604736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.604769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.604897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.604931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 
00:36:40.017 [2024-12-16 02:58:10.605108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.605141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.605401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.605433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.605560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.605593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.605697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.605729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.605864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.605898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 
00:36:40.017 [2024-12-16 02:58:10.606068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.606101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.606277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.606309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.606487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.606525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.606632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.606665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.606943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.606977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 
00:36:40.017 [2024-12-16 02:58:10.607088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.607121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.607308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.607340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.607520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.607552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.607655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.607688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.607806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.607838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 
00:36:40.017 [2024-12-16 02:58:10.608085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.608118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.608323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.608356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.608489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.608521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.608651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.608684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.608863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.608897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 
00:36:40.017 [2024-12-16 02:58:10.609089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.609121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.609294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.609327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.609506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.609538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.609652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.609684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.609813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.609845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 
00:36:40.017 [2024-12-16 02:58:10.609963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.609995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.610114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.610146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.610317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.610349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.610524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.017 [2024-12-16 02:58:10.610555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.017 qpair failed and we were unable to recover it. 00:36:40.017 [2024-12-16 02:58:10.610670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.018 [2024-12-16 02:58:10.610701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.018 qpair failed and we were unable to recover it. 
00:36:40.018 [2024-12-16 02:58:10.610811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.018 [2024-12-16 02:58:10.610844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.018 qpair failed and we were unable to recover it. 00:36:40.018 [2024-12-16 02:58:10.610969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.018 [2024-12-16 02:58:10.611001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.018 qpair failed and we were unable to recover it. 00:36:40.018 [2024-12-16 02:58:10.611109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.018 [2024-12-16 02:58:10.611141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.018 qpair failed and we were unable to recover it. 00:36:40.018 [2024-12-16 02:58:10.611260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.018 [2024-12-16 02:58:10.611292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.018 qpair failed and we were unable to recover it. 00:36:40.018 [2024-12-16 02:58:10.611507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.018 [2024-12-16 02:58:10.611580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.018 qpair failed and we were unable to recover it. 
00:36:40.018 [2024-12-16 02:58:10.611928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.018 [2024-12-16 02:58:10.611998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.018 qpair failed and we were unable to recover it. 00:36:40.018 [2024-12-16 02:58:10.612132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.018 [2024-12-16 02:58:10.612170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.018 qpair failed and we were unable to recover it. 00:36:40.018 [2024-12-16 02:58:10.613991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.018 [2024-12-16 02:58:10.614048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.018 qpair failed and we were unable to recover it. 00:36:40.018 [2024-12-16 02:58:10.614333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.018 [2024-12-16 02:58:10.614368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.018 qpair failed and we were unable to recover it. 00:36:40.018 [2024-12-16 02:58:10.614479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.018 [2024-12-16 02:58:10.614513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.018 qpair failed and we were unable to recover it. 
00:36:40.018 [2024-12-16 02:58:10.615815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.018 [2024-12-16 02:58:10.615886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.018 qpair failed and we were unable to recover it. 00:36:40.018 [2024-12-16 02:58:10.616174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.018 [2024-12-16 02:58:10.616207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.018 qpair failed and we were unable to recover it. 00:36:40.018 [2024-12-16 02:58:10.617577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.018 [2024-12-16 02:58:10.617628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.018 qpair failed and we were unable to recover it. 00:36:40.018 [2024-12-16 02:58:10.617862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.018 [2024-12-16 02:58:10.617896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.018 qpair failed and we were unable to recover it. 00:36:40.018 [2024-12-16 02:58:10.618137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.018 [2024-12-16 02:58:10.618170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.018 qpair failed and we were unable to recover it. 
00:36:40.018 [2024-12-16 02:58:10.618341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.018 [2024-12-16 02:58:10.618373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.018 qpair failed and we were unable to recover it. 00:36:40.018 [2024-12-16 02:58:10.618505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.018 [2024-12-16 02:58:10.618538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.018 qpair failed and we were unable to recover it. 00:36:40.018 [2024-12-16 02:58:10.618734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.018 [2024-12-16 02:58:10.618775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.018 qpair failed and we were unable to recover it. 00:36:40.018 [2024-12-16 02:58:10.618903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.018 [2024-12-16 02:58:10.618936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.018 qpair failed and we were unable to recover it. 00:36:40.018 [2024-12-16 02:58:10.619063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.018 [2024-12-16 02:58:10.619096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.018 qpair failed and we were unable to recover it. 
00:36:40.018 [2024-12-16 02:58:10.619296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.018 [2024-12-16 02:58:10.619330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.018 qpair failed and we were unable to recover it. 00:36:40.018 [2024-12-16 02:58:10.619528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.018 [2024-12-16 02:58:10.619561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.018 qpair failed and we were unable to recover it. 00:36:40.018 [2024-12-16 02:58:10.619697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.018 [2024-12-16 02:58:10.619730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.018 qpair failed and we were unable to recover it. 00:36:40.018 [2024-12-16 02:58:10.619842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.018 [2024-12-16 02:58:10.619883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.018 qpair failed and we were unable to recover it. 00:36:40.018 [2024-12-16 02:58:10.620005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.018 [2024-12-16 02:58:10.620035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.018 qpair failed and we were unable to recover it. 
00:36:40.018 [2024-12-16 02:58:10.620154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.018 [2024-12-16 02:58:10.620187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.018 qpair failed and we were unable to recover it. 00:36:40.018 [2024-12-16 02:58:10.620435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.018 [2024-12-16 02:58:10.620468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.018 qpair failed and we were unable to recover it. 00:36:40.018 [2024-12-16 02:58:10.620655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.018 [2024-12-16 02:58:10.620688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.018 qpair failed and we were unable to recover it. 00:36:40.018 [2024-12-16 02:58:10.620874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.018 [2024-12-16 02:58:10.620908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.018 qpair failed and we were unable to recover it. 00:36:40.018 [2024-12-16 02:58:10.621087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.018 [2024-12-16 02:58:10.621120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.018 qpair failed and we were unable to recover it. 
00:36:40.018 [2024-12-16 02:58:10.621234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.018 [2024-12-16 02:58:10.621267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.018 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure message repeated 13 more times for tqpair=0x7f2744000b90, timestamps 02:58:10.621459 through 02:58:10.622977 ...]
00:36:40.019 [2024-12-16 02:58:10.623815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.019 [2024-12-16 02:58:10.623903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.019 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure message repeated 100 more times for tqpair=0x7f2738000b90, timestamps 02:58:10.624128 through 02:58:10.646368 ...]
00:36:40.308 [2024-12-16 02:58:10.646483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.308 [2024-12-16 02:58:10.646516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.308 qpair failed and we were unable to recover it. 00:36:40.308 [2024-12-16 02:58:10.646696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.308 [2024-12-16 02:58:10.646730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.308 qpair failed and we were unable to recover it. 00:36:40.308 [2024-12-16 02:58:10.646930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.308 [2024-12-16 02:58:10.646965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.308 qpair failed and we were unable to recover it. 00:36:40.308 [2024-12-16 02:58:10.647145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.308 [2024-12-16 02:58:10.647178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.308 qpair failed and we were unable to recover it. 00:36:40.308 [2024-12-16 02:58:10.647307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.308 [2024-12-16 02:58:10.647340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.308 qpair failed and we were unable to recover it. 
00:36:40.308 [2024-12-16 02:58:10.647452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.308 [2024-12-16 02:58:10.647485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.308 qpair failed and we were unable to recover it. 00:36:40.308 [2024-12-16 02:58:10.647729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.308 [2024-12-16 02:58:10.647763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.308 qpair failed and we were unable to recover it. 00:36:40.308 [2024-12-16 02:58:10.647949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.308 [2024-12-16 02:58:10.647983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.308 qpair failed and we were unable to recover it. 00:36:40.308 [2024-12-16 02:58:10.648088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.308 [2024-12-16 02:58:10.648122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.308 qpair failed and we were unable to recover it. 00:36:40.308 [2024-12-16 02:58:10.648236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.308 [2024-12-16 02:58:10.648269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.308 qpair failed and we were unable to recover it. 
00:36:40.308 [2024-12-16 02:58:10.648386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.308 [2024-12-16 02:58:10.648419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.308 qpair failed and we were unable to recover it. 00:36:40.308 [2024-12-16 02:58:10.648611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.308 [2024-12-16 02:58:10.648645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.308 qpair failed and we were unable to recover it. 00:36:40.308 [2024-12-16 02:58:10.648910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.308 [2024-12-16 02:58:10.648945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.308 qpair failed and we were unable to recover it. 00:36:40.308 [2024-12-16 02:58:10.649060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.308 [2024-12-16 02:58:10.649093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.308 qpair failed and we were unable to recover it. 00:36:40.308 [2024-12-16 02:58:10.649289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.308 [2024-12-16 02:58:10.649322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.308 qpair failed and we were unable to recover it. 
00:36:40.308 [2024-12-16 02:58:10.649438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.308 [2024-12-16 02:58:10.649471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.308 qpair failed and we were unable to recover it. 00:36:40.308 [2024-12-16 02:58:10.649589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.308 [2024-12-16 02:58:10.649621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.308 qpair failed and we were unable to recover it. 00:36:40.308 [2024-12-16 02:58:10.649790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.308 [2024-12-16 02:58:10.649823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.308 qpair failed and we were unable to recover it. 00:36:40.308 [2024-12-16 02:58:10.649948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.308 [2024-12-16 02:58:10.649983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.308 qpair failed and we were unable to recover it. 00:36:40.308 [2024-12-16 02:58:10.650112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.308 [2024-12-16 02:58:10.650151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.308 qpair failed and we were unable to recover it. 
00:36:40.308 [2024-12-16 02:58:10.650268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.308 [2024-12-16 02:58:10.650301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.308 qpair failed and we were unable to recover it. 00:36:40.308 [2024-12-16 02:58:10.650428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.308 [2024-12-16 02:58:10.650460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.308 qpair failed and we were unable to recover it. 00:36:40.308 [2024-12-16 02:58:10.650673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.308 [2024-12-16 02:58:10.650705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.308 qpair failed and we were unable to recover it. 00:36:40.308 [2024-12-16 02:58:10.650887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.308 [2024-12-16 02:58:10.650919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.308 qpair failed and we were unable to recover it. 00:36:40.308 [2024-12-16 02:58:10.651031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.308 [2024-12-16 02:58:10.651061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.308 qpair failed and we were unable to recover it. 
00:36:40.308 [2024-12-16 02:58:10.651297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.308 [2024-12-16 02:58:10.651327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.308 qpair failed and we were unable to recover it. 00:36:40.308 [2024-12-16 02:58:10.651436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.308 [2024-12-16 02:58:10.651467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.308 qpair failed and we were unable to recover it. 00:36:40.308 [2024-12-16 02:58:10.651663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.308 [2024-12-16 02:58:10.651693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 00:36:40.309 [2024-12-16 02:58:10.651885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.651916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 00:36:40.309 [2024-12-16 02:58:10.652040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.652069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 
00:36:40.309 [2024-12-16 02:58:10.652187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.652217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 00:36:40.309 [2024-12-16 02:58:10.652396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.652426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 00:36:40.309 [2024-12-16 02:58:10.652539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.652569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 00:36:40.309 [2024-12-16 02:58:10.652742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.652773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 00:36:40.309 [2024-12-16 02:58:10.652952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.652983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 
00:36:40.309 [2024-12-16 02:58:10.653110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.653142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 00:36:40.309 [2024-12-16 02:58:10.653243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.653276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 00:36:40.309 [2024-12-16 02:58:10.653402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.653434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 00:36:40.309 [2024-12-16 02:58:10.654747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.654795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 00:36:40.309 [2024-12-16 02:58:10.655007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.655039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 
00:36:40.309 [2024-12-16 02:58:10.655278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.655311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 00:36:40.309 [2024-12-16 02:58:10.655593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.655624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 00:36:40.309 [2024-12-16 02:58:10.655758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.655791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 00:36:40.309 [2024-12-16 02:58:10.655935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.655966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 00:36:40.309 [2024-12-16 02:58:10.656175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.656208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 
00:36:40.309 [2024-12-16 02:58:10.656381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.656413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 00:36:40.309 [2024-12-16 02:58:10.656535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.656568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 00:36:40.309 [2024-12-16 02:58:10.656759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.656792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 00:36:40.309 [2024-12-16 02:58:10.657040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.657075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 00:36:40.309 [2024-12-16 02:58:10.657217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.657260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 
00:36:40.309 [2024-12-16 02:58:10.657473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.657504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 00:36:40.309 [2024-12-16 02:58:10.657604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.657633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 00:36:40.309 [2024-12-16 02:58:10.657754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.657783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 00:36:40.309 [2024-12-16 02:58:10.657902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.657934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 00:36:40.309 [2024-12-16 02:58:10.658067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.658097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 
00:36:40.309 [2024-12-16 02:58:10.658305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.658334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 00:36:40.309 [2024-12-16 02:58:10.658506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.658536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 00:36:40.309 [2024-12-16 02:58:10.658648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.658677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 00:36:40.309 [2024-12-16 02:58:10.658783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.658814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 00:36:40.309 [2024-12-16 02:58:10.658999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.659037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 
00:36:40.309 [2024-12-16 02:58:10.659143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.659173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 00:36:40.309 [2024-12-16 02:58:10.659410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.659444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 00:36:40.309 [2024-12-16 02:58:10.659554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.659586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 00:36:40.309 [2024-12-16 02:58:10.659769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.309 [2024-12-16 02:58:10.659801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.309 qpair failed and we were unable to recover it. 00:36:40.309 [2024-12-16 02:58:10.659953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.310 [2024-12-16 02:58:10.659983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.310 qpair failed and we were unable to recover it. 
00:36:40.310 [2024-12-16 02:58:10.660153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.310 [2024-12-16 02:58:10.660182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.310 qpair failed and we were unable to recover it. 00:36:40.310 [2024-12-16 02:58:10.660310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.310 [2024-12-16 02:58:10.660343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.310 qpair failed and we were unable to recover it. 00:36:40.310 [2024-12-16 02:58:10.660523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.310 [2024-12-16 02:58:10.660555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.310 qpair failed and we were unable to recover it. 00:36:40.310 [2024-12-16 02:58:10.660666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.310 [2024-12-16 02:58:10.660698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.310 qpair failed and we were unable to recover it. 00:36:40.310 [2024-12-16 02:58:10.660816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.310 [2024-12-16 02:58:10.660858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.310 qpair failed and we were unable to recover it. 
00:36:40.310 [2024-12-16 02:58:10.661074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.310 [2024-12-16 02:58:10.661107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.310 qpair failed and we were unable to recover it. 00:36:40.310 [2024-12-16 02:58:10.661278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.310 [2024-12-16 02:58:10.661311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.310 qpair failed and we were unable to recover it. 00:36:40.310 [2024-12-16 02:58:10.661418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.310 [2024-12-16 02:58:10.661450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.310 qpair failed and we were unable to recover it. 00:36:40.310 [2024-12-16 02:58:10.661575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.310 [2024-12-16 02:58:10.661608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.310 qpair failed and we were unable to recover it. 00:36:40.310 [2024-12-16 02:58:10.661780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.310 [2024-12-16 02:58:10.661812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.310 qpair failed and we were unable to recover it. 
00:36:40.310 [2024-12-16 02:58:10.662000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:40.310 [2024-12-16 02:58:10.662033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 
00:36:40.310 qpair failed and we were unable to recover it. 
00:36:40.313 [... the same three-line error sequence (connect() refused with errno = 111, qpair 0x7f2738000b90 to 10.0.0.2 port 4420 unrecoverable) repeats continuously, with only timestamps changing, through 02:58:10.684 ...] 
00:36:40.313 [2024-12-16 02:58:10.684801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.313 [2024-12-16 02:58:10.684832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.313 qpair failed and we were unable to recover it. 00:36:40.313 [2024-12-16 02:58:10.684956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.313 [2024-12-16 02:58:10.684985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.313 qpair failed and we were unable to recover it. 00:36:40.313 [2024-12-16 02:58:10.685095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.313 [2024-12-16 02:58:10.685125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.313 qpair failed and we were unable to recover it. 00:36:40.313 [2024-12-16 02:58:10.685294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.313 [2024-12-16 02:58:10.685324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.313 qpair failed and we were unable to recover it. 00:36:40.313 [2024-12-16 02:58:10.685523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.313 [2024-12-16 02:58:10.685556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.313 qpair failed and we were unable to recover it. 
00:36:40.313 [2024-12-16 02:58:10.685684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.313 [2024-12-16 02:58:10.685716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.313 qpair failed and we were unable to recover it. 00:36:40.313 [2024-12-16 02:58:10.685835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.313 [2024-12-16 02:58:10.685907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.313 qpair failed and we were unable to recover it. 00:36:40.313 [2024-12-16 02:58:10.686144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.313 [2024-12-16 02:58:10.686178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.313 qpair failed and we were unable to recover it. 00:36:40.313 [2024-12-16 02:58:10.686305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.313 [2024-12-16 02:58:10.686335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.313 qpair failed and we were unable to recover it. 00:36:40.313 [2024-12-16 02:58:10.686503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.313 [2024-12-16 02:58:10.686545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.313 qpair failed and we were unable to recover it. 
00:36:40.313 [2024-12-16 02:58:10.686718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.313 [2024-12-16 02:58:10.686750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.313 qpair failed and we were unable to recover it. 00:36:40.313 [2024-12-16 02:58:10.686991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.313 [2024-12-16 02:58:10.687025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.313 qpair failed and we were unable to recover it. 00:36:40.313 [2024-12-16 02:58:10.687210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.313 [2024-12-16 02:58:10.687240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.313 qpair failed and we were unable to recover it. 00:36:40.313 [2024-12-16 02:58:10.687348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.313 [2024-12-16 02:58:10.687377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.313 qpair failed and we were unable to recover it. 00:36:40.313 [2024-12-16 02:58:10.687557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.313 [2024-12-16 02:58:10.687586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.314 qpair failed and we were unable to recover it. 
00:36:40.314 [2024-12-16 02:58:10.687702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.314 [2024-12-16 02:58:10.687731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.314 qpair failed and we were unable to recover it. 00:36:40.314 [2024-12-16 02:58:10.687840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.314 [2024-12-16 02:58:10.687877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.314 qpair failed and we were unable to recover it. 00:36:40.314 [2024-12-16 02:58:10.687988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.314 [2024-12-16 02:58:10.688019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.314 qpair failed and we were unable to recover it. 00:36:40.314 [2024-12-16 02:58:10.688270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.314 [2024-12-16 02:58:10.688299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.314 qpair failed and we were unable to recover it. 00:36:40.314 [2024-12-16 02:58:10.688400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.314 [2024-12-16 02:58:10.688430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.314 qpair failed and we were unable to recover it. 
00:36:40.314 [2024-12-16 02:58:10.688661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.314 [2024-12-16 02:58:10.688691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.314 qpair failed and we were unable to recover it. 00:36:40.314 [2024-12-16 02:58:10.688818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.314 [2024-12-16 02:58:10.688855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.314 qpair failed and we were unable to recover it. 00:36:40.314 [2024-12-16 02:58:10.688966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.314 [2024-12-16 02:58:10.688997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.314 qpair failed and we were unable to recover it. 00:36:40.314 [2024-12-16 02:58:10.689189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.314 [2024-12-16 02:58:10.689218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.314 qpair failed and we were unable to recover it. 00:36:40.314 [2024-12-16 02:58:10.689388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.314 [2024-12-16 02:58:10.689418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.314 qpair failed and we were unable to recover it. 
00:36:40.314 [2024-12-16 02:58:10.689588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.314 [2024-12-16 02:58:10.689617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.314 qpair failed and we were unable to recover it. 00:36:40.314 [2024-12-16 02:58:10.689725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.314 [2024-12-16 02:58:10.689754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.314 qpair failed and we were unable to recover it. 00:36:40.314 [2024-12-16 02:58:10.689886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.314 [2024-12-16 02:58:10.689917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.314 qpair failed and we were unable to recover it. 00:36:40.314 [2024-12-16 02:58:10.690039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.314 [2024-12-16 02:58:10.690085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.314 qpair failed and we were unable to recover it. 00:36:40.314 [2024-12-16 02:58:10.690274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.314 [2024-12-16 02:58:10.690331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.314 qpair failed and we were unable to recover it. 
00:36:40.314 [2024-12-16 02:58:10.690537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.314 [2024-12-16 02:58:10.690574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.314 qpair failed and we were unable to recover it. 00:36:40.314 [2024-12-16 02:58:10.690831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.314 [2024-12-16 02:58:10.690884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.314 qpair failed and we were unable to recover it. 00:36:40.314 [2024-12-16 02:58:10.691021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.314 [2024-12-16 02:58:10.691054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.314 qpair failed and we were unable to recover it. 00:36:40.314 [2024-12-16 02:58:10.691174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.314 [2024-12-16 02:58:10.691211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.314 qpair failed and we were unable to recover it. 00:36:40.314 [2024-12-16 02:58:10.691407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.314 [2024-12-16 02:58:10.691439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.314 qpair failed and we were unable to recover it. 
00:36:40.314 [2024-12-16 02:58:10.691554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.314 [2024-12-16 02:58:10.691586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.314 qpair failed and we were unable to recover it. 00:36:40.314 [2024-12-16 02:58:10.691707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.314 [2024-12-16 02:58:10.691743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.314 qpair failed and we were unable to recover it. 00:36:40.314 [2024-12-16 02:58:10.691927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.314 [2024-12-16 02:58:10.691971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.314 qpair failed and we were unable to recover it. 00:36:40.314 [2024-12-16 02:58:10.692105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.314 [2024-12-16 02:58:10.692138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.314 qpair failed and we were unable to recover it. 00:36:40.314 [2024-12-16 02:58:10.692321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.314 [2024-12-16 02:58:10.692364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.314 qpair failed and we were unable to recover it. 
00:36:40.314 [2024-12-16 02:58:10.692489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.314 [2024-12-16 02:58:10.692522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.314 qpair failed and we were unable to recover it. 00:36:40.314 [2024-12-16 02:58:10.692695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.314 [2024-12-16 02:58:10.692728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.314 qpair failed and we were unable to recover it. 00:36:40.314 [2024-12-16 02:58:10.692921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.314 [2024-12-16 02:58:10.692965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.314 qpair failed and we were unable to recover it. 00:36:40.314 [2024-12-16 02:58:10.693177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.314 [2024-12-16 02:58:10.693214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.314 qpair failed and we were unable to recover it. 00:36:40.314 [2024-12-16 02:58:10.693399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.314 [2024-12-16 02:58:10.693435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 
00:36:40.315 [2024-12-16 02:58:10.693631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.693675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 00:36:40.315 [2024-12-16 02:58:10.693882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.693917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 00:36:40.315 [2024-12-16 02:58:10.694041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.694074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 00:36:40.315 [2024-12-16 02:58:10.694249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.694283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 00:36:40.315 [2024-12-16 02:58:10.694595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.694630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 
00:36:40.315 [2024-12-16 02:58:10.694746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.694778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 00:36:40.315 [2024-12-16 02:58:10.694886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.694933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 00:36:40.315 [2024-12-16 02:58:10.695153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.695186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 00:36:40.315 [2024-12-16 02:58:10.695291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.695323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 00:36:40.315 [2024-12-16 02:58:10.695439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.695478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 
00:36:40.315 [2024-12-16 02:58:10.695673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.695709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 00:36:40.315 [2024-12-16 02:58:10.695845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.695900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 00:36:40.315 [2024-12-16 02:58:10.696193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.696229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 00:36:40.315 [2024-12-16 02:58:10.696365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.696398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 00:36:40.315 [2024-12-16 02:58:10.696659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.696695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 
00:36:40.315 [2024-12-16 02:58:10.696819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.696869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 00:36:40.315 [2024-12-16 02:58:10.697005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.697038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 00:36:40.315 [2024-12-16 02:58:10.697253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.697289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 00:36:40.315 [2024-12-16 02:58:10.697410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.697443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 00:36:40.315 [2024-12-16 02:58:10.697688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.697725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 
00:36:40.315 [2024-12-16 02:58:10.697939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.697975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 00:36:40.315 [2024-12-16 02:58:10.698227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.698268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 00:36:40.315 [2024-12-16 02:58:10.698413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.698446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 00:36:40.315 [2024-12-16 02:58:10.698634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.698667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 00:36:40.315 [2024-12-16 02:58:10.698955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.699000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 
00:36:40.315 [2024-12-16 02:58:10.699197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.699230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 00:36:40.315 [2024-12-16 02:58:10.699468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.699504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 00:36:40.315 [2024-12-16 02:58:10.699688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.699721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 00:36:40.315 [2024-12-16 02:58:10.699859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.699904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 00:36:40.315 [2024-12-16 02:58:10.700039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.700074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 
00:36:40.315 [2024-12-16 02:58:10.701523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.701581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 00:36:40.315 [2024-12-16 02:58:10.701875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.701913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 00:36:40.315 [2024-12-16 02:58:10.702057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.702094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 00:36:40.315 [2024-12-16 02:58:10.702344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.702379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 00:36:40.315 [2024-12-16 02:58:10.702501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.315 [2024-12-16 02:58:10.702534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.315 qpair failed and we were unable to recover it. 
00:36:40.318 [2024-12-16 02:58:10.726839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.318 [2024-12-16 02:58:10.726889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.318 qpair failed and we were unable to recover it. 00:36:40.318 [2024-12-16 02:58:10.727137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.318 [2024-12-16 02:58:10.727174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.318 qpair failed and we were unable to recover it. 00:36:40.318 [2024-12-16 02:58:10.727416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.318 [2024-12-16 02:58:10.727449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.318 qpair failed and we were unable to recover it. 00:36:40.319 [2024-12-16 02:58:10.727695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.727732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 00:36:40.319 [2024-12-16 02:58:10.727922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.727961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 
00:36:40.319 [2024-12-16 02:58:10.728073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.728105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 00:36:40.319 [2024-12-16 02:58:10.728323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.728359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 00:36:40.319 [2024-12-16 02:58:10.728568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.728614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 00:36:40.319 [2024-12-16 02:58:10.728727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.728771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 00:36:40.319 [2024-12-16 02:58:10.728995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.729033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 
00:36:40.319 [2024-12-16 02:58:10.729182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.729219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 00:36:40.319 [2024-12-16 02:58:10.729403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.729441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 00:36:40.319 [2024-12-16 02:58:10.729582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.729619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 00:36:40.319 [2024-12-16 02:58:10.729900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.729938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 00:36:40.319 [2024-12-16 02:58:10.730145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.730183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 
00:36:40.319 [2024-12-16 02:58:10.730358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.730394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 00:36:40.319 [2024-12-16 02:58:10.730635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.730683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 00:36:40.319 [2024-12-16 02:58:10.730870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.730903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 00:36:40.319 [2024-12-16 02:58:10.731092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.731122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 00:36:40.319 [2024-12-16 02:58:10.731225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.731266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 
00:36:40.319 [2024-12-16 02:58:10.731455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.731487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 00:36:40.319 [2024-12-16 02:58:10.731665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.731697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 00:36:40.319 [2024-12-16 02:58:10.731890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.731928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 00:36:40.319 [2024-12-16 02:58:10.732139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.732173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 00:36:40.319 [2024-12-16 02:58:10.732418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.732461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 
00:36:40.319 [2024-12-16 02:58:10.732670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.732705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 00:36:40.319 [2024-12-16 02:58:10.732967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.733004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 00:36:40.319 [2024-12-16 02:58:10.733161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.733198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 00:36:40.319 [2024-12-16 02:58:10.733398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.733433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 00:36:40.319 [2024-12-16 02:58:10.733547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.733580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 
00:36:40.319 [2024-12-16 02:58:10.733759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.733794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 00:36:40.319 [2024-12-16 02:58:10.733914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.733948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 00:36:40.319 [2024-12-16 02:58:10.734188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.734221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 00:36:40.319 [2024-12-16 02:58:10.734430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.734466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 00:36:40.319 [2024-12-16 02:58:10.734662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.734693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 
00:36:40.319 [2024-12-16 02:58:10.734806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.734838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 00:36:40.319 [2024-12-16 02:58:10.735078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.735113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 00:36:40.319 [2024-12-16 02:58:10.735236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.735267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 00:36:40.319 [2024-12-16 02:58:10.735471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.735506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 00:36:40.319 [2024-12-16 02:58:10.735645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.735687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 
00:36:40.319 [2024-12-16 02:58:10.735820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.735874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 00:36:40.319 [2024-12-16 02:58:10.735993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.319 [2024-12-16 02:58:10.736028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.319 qpair failed and we were unable to recover it. 00:36:40.320 [2024-12-16 02:58:10.736231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.736266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 00:36:40.320 [2024-12-16 02:58:10.736399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.736443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 00:36:40.320 [2024-12-16 02:58:10.736566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.736599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 
00:36:40.320 [2024-12-16 02:58:10.736785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.736821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 00:36:40.320 [2024-12-16 02:58:10.736949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.736984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 00:36:40.320 [2024-12-16 02:58:10.737121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.737156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 00:36:40.320 [2024-12-16 02:58:10.737282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.737323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 00:36:40.320 [2024-12-16 02:58:10.737447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.737479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 
00:36:40.320 [2024-12-16 02:58:10.737621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.737657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 00:36:40.320 [2024-12-16 02:58:10.737904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.737941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 00:36:40.320 [2024-12-16 02:58:10.738068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.738112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 00:36:40.320 [2024-12-16 02:58:10.738300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.738333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 00:36:40.320 [2024-12-16 02:58:10.738506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.738536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 
00:36:40.320 [2024-12-16 02:58:10.738732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.738766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 00:36:40.320 [2024-12-16 02:58:10.738892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.738926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 00:36:40.320 [2024-12-16 02:58:10.739076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.739108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 00:36:40.320 [2024-12-16 02:58:10.739246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.739281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 00:36:40.320 [2024-12-16 02:58:10.739397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.739432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 
00:36:40.320 [2024-12-16 02:58:10.739634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.739668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 00:36:40.320 [2024-12-16 02:58:10.739802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.739842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 00:36:40.320 [2024-12-16 02:58:10.740051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.740089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 00:36:40.320 [2024-12-16 02:58:10.740218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.740249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 00:36:40.320 [2024-12-16 02:58:10.740431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.740466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 
00:36:40.320 [2024-12-16 02:58:10.740580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.740612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 00:36:40.320 [2024-12-16 02:58:10.740791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.740824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 00:36:40.320 [2024-12-16 02:58:10.740967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.741002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 00:36:40.320 [2024-12-16 02:58:10.741243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.741274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 00:36:40.320 [2024-12-16 02:58:10.741404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.741442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 
00:36:40.320 [2024-12-16 02:58:10.741580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.741615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 00:36:40.320 [2024-12-16 02:58:10.741733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.741775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 00:36:40.320 [2024-12-16 02:58:10.741908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.741945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 00:36:40.320 [2024-12-16 02:58:10.742192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.742227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 00:36:40.320 [2024-12-16 02:58:10.742419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.742454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 
00:36:40.320 [2024-12-16 02:58:10.742587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.742618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 00:36:40.320 [2024-12-16 02:58:10.742737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.742768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 00:36:40.320 [2024-12-16 02:58:10.742908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.742945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 00:36:40.320 [2024-12-16 02:58:10.743079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.743111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 00:36:40.320 [2024-12-16 02:58:10.743297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.320 [2024-12-16 02:58:10.743344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.320 qpair failed and we were unable to recover it. 
00:36:40.323 [2024-12-16 02:58:10.764638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.323 [2024-12-16 02:58:10.764669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.323 qpair failed and we were unable to recover it. 00:36:40.323 [2024-12-16 02:58:10.764856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.323 [2024-12-16 02:58:10.764893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.323 qpair failed and we were unable to recover it. 00:36:40.323 [2024-12-16 02:58:10.764999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.765030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 00:36:40.324 [2024-12-16 02:58:10.765143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.765172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 00:36:40.324 [2024-12-16 02:58:10.765356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.765389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 
00:36:40.324 [2024-12-16 02:58:10.765499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.765529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 00:36:40.324 [2024-12-16 02:58:10.765660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.765692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 00:36:40.324 [2024-12-16 02:58:10.765875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.765912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 00:36:40.324 [2024-12-16 02:58:10.766036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.766069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 00:36:40.324 [2024-12-16 02:58:10.766274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.766307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 
00:36:40.324 [2024-12-16 02:58:10.766463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.766498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 00:36:40.324 [2024-12-16 02:58:10.766619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.766649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 00:36:40.324 [2024-12-16 02:58:10.766811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.766839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 00:36:40.324 [2024-12-16 02:58:10.766997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.767027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 00:36:40.324 [2024-12-16 02:58:10.767201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.767232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 
00:36:40.324 [2024-12-16 02:58:10.767342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.767370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 00:36:40.324 [2024-12-16 02:58:10.767551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.767580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 00:36:40.324 [2024-12-16 02:58:10.767752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.767785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 00:36:40.324 [2024-12-16 02:58:10.767975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.768014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 00:36:40.324 [2024-12-16 02:58:10.768157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.768192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 
00:36:40.324 [2024-12-16 02:58:10.768316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.768357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 00:36:40.324 [2024-12-16 02:58:10.768578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.768612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 00:36:40.324 [2024-12-16 02:58:10.768880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.768917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 00:36:40.324 [2024-12-16 02:58:10.769078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.769113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 00:36:40.324 [2024-12-16 02:58:10.769308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.769344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 
00:36:40.324 [2024-12-16 02:58:10.769477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.769514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 00:36:40.324 [2024-12-16 02:58:10.769699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.769732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 00:36:40.324 [2024-12-16 02:58:10.769862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.769900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 00:36:40.324 [2024-12-16 02:58:10.770007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.770036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 00:36:40.324 [2024-12-16 02:58:10.770223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.770256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 
00:36:40.324 [2024-12-16 02:58:10.770383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.770417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 00:36:40.324 [2024-12-16 02:58:10.770589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.770617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 00:36:40.324 [2024-12-16 02:58:10.770813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.770857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 00:36:40.324 [2024-12-16 02:58:10.771036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.771065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 00:36:40.324 [2024-12-16 02:58:10.771237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.771269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 
00:36:40.324 [2024-12-16 02:58:10.771443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.771472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 00:36:40.324 [2024-12-16 02:58:10.771713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.771756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 00:36:40.324 [2024-12-16 02:58:10.771932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.771964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 00:36:40.324 [2024-12-16 02:58:10.772141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.772173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 00:36:40.324 [2024-12-16 02:58:10.772304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.772339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 
00:36:40.324 [2024-12-16 02:58:10.772561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.324 [2024-12-16 02:58:10.772592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.324 qpair failed and we were unable to recover it. 00:36:40.324 [2024-12-16 02:58:10.772715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.772752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 00:36:40.325 [2024-12-16 02:58:10.772881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.772912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 00:36:40.325 [2024-12-16 02:58:10.773060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.773091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 00:36:40.325 [2024-12-16 02:58:10.773280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.773313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 
00:36:40.325 [2024-12-16 02:58:10.773429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.773459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 00:36:40.325 [2024-12-16 02:58:10.773620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.773649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 00:36:40.325 [2024-12-16 02:58:10.773745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.773781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 00:36:40.325 [2024-12-16 02:58:10.773975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.774006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 00:36:40.325 [2024-12-16 02:58:10.774124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.774154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 
00:36:40.325 [2024-12-16 02:58:10.774416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.774448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 00:36:40.325 [2024-12-16 02:58:10.774621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.774652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 00:36:40.325 [2024-12-16 02:58:10.774768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.774797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 00:36:40.325 [2024-12-16 02:58:10.774956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.774990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 00:36:40.325 [2024-12-16 02:58:10.775089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.775119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 
00:36:40.325 [2024-12-16 02:58:10.775228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.775259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 00:36:40.325 [2024-12-16 02:58:10.775375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.775403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 00:36:40.325 [2024-12-16 02:58:10.775653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.775686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 00:36:40.325 [2024-12-16 02:58:10.775876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.775907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 00:36:40.325 [2024-12-16 02:58:10.776078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.776110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 
00:36:40.325 [2024-12-16 02:58:10.776274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.776303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 00:36:40.325 [2024-12-16 02:58:10.776421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.776450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 00:36:40.325 [2024-12-16 02:58:10.776557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.776596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 00:36:40.325 [2024-12-16 02:58:10.776734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.776765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 00:36:40.325 [2024-12-16 02:58:10.776971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.777005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 
00:36:40.325 [2024-12-16 02:58:10.777134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.777166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 00:36:40.325 [2024-12-16 02:58:10.777341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.777380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 00:36:40.325 [2024-12-16 02:58:10.777517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.777546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 00:36:40.325 [2024-12-16 02:58:10.777727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.777755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 00:36:40.325 [2024-12-16 02:58:10.777927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.777964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 
00:36:40.325 [2024-12-16 02:58:10.778083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.778112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 00:36:40.325 [2024-12-16 02:58:10.778228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.778257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 00:36:40.325 [2024-12-16 02:58:10.778422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.778460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 00:36:40.325 [2024-12-16 02:58:10.778633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.778665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 00:36:40.325 [2024-12-16 02:58:10.778768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.778801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 
00:36:40.325 [2024-12-16 02:58:10.778944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.325 [2024-12-16 02:58:10.778977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.325 qpair failed and we were unable to recover it. 
00:36:40.328 [2024-12-16 02:58:10.799392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.328 [2024-12-16 02:58:10.799424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.328 qpair failed and we were unable to recover it. 00:36:40.328 [2024-12-16 02:58:10.799549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.328 [2024-12-16 02:58:10.799578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.328 qpair failed and we were unable to recover it. 00:36:40.328 [2024-12-16 02:58:10.799691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.328 [2024-12-16 02:58:10.799719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.329 qpair failed and we were unable to recover it. 00:36:40.329 [2024-12-16 02:58:10.799827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.329 [2024-12-16 02:58:10.799871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.329 qpair failed and we were unable to recover it. 00:36:40.329 [2024-12-16 02:58:10.800011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.329 [2024-12-16 02:58:10.800043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.329 qpair failed and we were unable to recover it. 
00:36:40.329 [2024-12-16 02:58:10.800146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.329 [2024-12-16 02:58:10.800175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.329 qpair failed and we were unable to recover it. 00:36:40.329 [2024-12-16 02:58:10.800407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.329 [2024-12-16 02:58:10.800445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.329 qpair failed and we were unable to recover it. 00:36:40.329 [2024-12-16 02:58:10.800552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.329 [2024-12-16 02:58:10.800581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.329 qpair failed and we were unable to recover it. 00:36:40.329 [2024-12-16 02:58:10.800700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.329 [2024-12-16 02:58:10.800730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.329 qpair failed and we were unable to recover it. 00:36:40.329 [2024-12-16 02:58:10.800834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.329 [2024-12-16 02:58:10.800888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.329 qpair failed and we were unable to recover it. 
00:36:40.329 [2024-12-16 02:58:10.801100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.329 [2024-12-16 02:58:10.801129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.329 qpair failed and we were unable to recover it. 00:36:40.329 [2024-12-16 02:58:10.801232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.329 [2024-12-16 02:58:10.801267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.329 qpair failed and we were unable to recover it. 00:36:40.329 [2024-12-16 02:58:10.801451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.329 [2024-12-16 02:58:10.801480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.329 qpair failed and we were unable to recover it. 00:36:40.329 [2024-12-16 02:58:10.801588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.329 [2024-12-16 02:58:10.801617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.329 qpair failed and we were unable to recover it. 00:36:40.329 [2024-12-16 02:58:10.801740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.329 [2024-12-16 02:58:10.801779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.329 qpair failed and we were unable to recover it. 
00:36:40.329 [2024-12-16 02:58:10.801964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.329 [2024-12-16 02:58:10.801994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.329 qpair failed and we were unable to recover it. 00:36:40.329 [2024-12-16 02:58:10.802115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.329 [2024-12-16 02:58:10.802163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.329 qpair failed and we were unable to recover it. 00:36:40.329 [2024-12-16 02:58:10.802284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.329 [2024-12-16 02:58:10.802317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.329 qpair failed and we were unable to recover it. 00:36:40.329 [2024-12-16 02:58:10.802424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.329 [2024-12-16 02:58:10.802457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.329 qpair failed and we were unable to recover it. 00:36:40.329 [2024-12-16 02:58:10.802670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.329 [2024-12-16 02:58:10.802708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.329 qpair failed and we were unable to recover it. 
00:36:40.329 [2024-12-16 02:58:10.802888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.329 [2024-12-16 02:58:10.802922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.329 qpair failed and we were unable to recover it. 00:36:40.329 [2024-12-16 02:58:10.803104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.329 [2024-12-16 02:58:10.803140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.329 qpair failed and we were unable to recover it. 00:36:40.329 [2024-12-16 02:58:10.803384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.329 [2024-12-16 02:58:10.803416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.329 qpair failed and we were unable to recover it. 00:36:40.329 [2024-12-16 02:58:10.803610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.329 [2024-12-16 02:58:10.803646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.329 qpair failed and we were unable to recover it. 00:36:40.329 [2024-12-16 02:58:10.803775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.329 [2024-12-16 02:58:10.803807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.329 qpair failed and we were unable to recover it. 
00:36:40.329 [2024-12-16 02:58:10.804001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.329 [2024-12-16 02:58:10.804037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.329 qpair failed and we were unable to recover it. 00:36:40.329 [2024-12-16 02:58:10.804149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.329 [2024-12-16 02:58:10.804182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.329 qpair failed and we were unable to recover it. 00:36:40.329 [2024-12-16 02:58:10.804293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.329 [2024-12-16 02:58:10.804324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.329 qpair failed and we were unable to recover it. 00:36:40.329 [2024-12-16 02:58:10.804456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.329 [2024-12-16 02:58:10.804492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.329 qpair failed and we were unable to recover it. 00:36:40.329 [2024-12-16 02:58:10.804625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.329 [2024-12-16 02:58:10.804657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.329 qpair failed and we were unable to recover it. 
00:36:40.329 [2024-12-16 02:58:10.805411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdadc70 is same with the state(6) to be set
00:36:40.329 [2024-12-16 02:58:10.805685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.329 [2024-12-16 02:58:10.805756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:40.329 qpair failed and we were unable to recover it.
00:36:40.329 [2024-12-16 02:58:10.805930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.329 [2024-12-16 02:58:10.805977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.329 qpair failed and we were unable to recover it.
00:36:40.331 [2024-12-16 02:58:10.816758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.331 [2024-12-16 02:58:10.816790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.331 qpair failed and we were unable to recover it. 00:36:40.331 [2024-12-16 02:58:10.816918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.331 [2024-12-16 02:58:10.816959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.331 qpair failed and we were unable to recover it. 00:36:40.331 [2024-12-16 02:58:10.817085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.331 [2024-12-16 02:58:10.817117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.331 qpair failed and we were unable to recover it. 00:36:40.331 [2024-12-16 02:58:10.817300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.331 [2024-12-16 02:58:10.817331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.331 qpair failed and we were unable to recover it. 00:36:40.331 [2024-12-16 02:58:10.817508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.331 [2024-12-16 02:58:10.817547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.331 qpair failed and we were unable to recover it. 
00:36:40.331 [2024-12-16 02:58:10.817673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.331 [2024-12-16 02:58:10.817705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.331 qpair failed and we were unable to recover it. 00:36:40.331 [2024-12-16 02:58:10.817819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.331 [2024-12-16 02:58:10.817861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.331 qpair failed and we were unable to recover it. 00:36:40.331 [2024-12-16 02:58:10.817974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.331 [2024-12-16 02:58:10.818006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.331 qpair failed and we were unable to recover it. 00:36:40.331 [2024-12-16 02:58:10.818133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.331 [2024-12-16 02:58:10.818165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.331 qpair failed and we were unable to recover it. 00:36:40.331 [2024-12-16 02:58:10.818274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.331 [2024-12-16 02:58:10.818306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.331 qpair failed and we were unable to recover it. 
00:36:40.331 [2024-12-16 02:58:10.818543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.331 [2024-12-16 02:58:10.818575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.331 qpair failed and we were unable to recover it. 00:36:40.331 [2024-12-16 02:58:10.818696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.331 [2024-12-16 02:58:10.818728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.331 qpair failed and we were unable to recover it. 00:36:40.331 [2024-12-16 02:58:10.818845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.331 [2024-12-16 02:58:10.818887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.331 qpair failed and we were unable to recover it. 00:36:40.331 [2024-12-16 02:58:10.818992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.331 [2024-12-16 02:58:10.819024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.331 qpair failed and we were unable to recover it. 00:36:40.331 [2024-12-16 02:58:10.819141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.331 [2024-12-16 02:58:10.819173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.331 qpair failed and we were unable to recover it. 
00:36:40.331 [2024-12-16 02:58:10.819384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.331 [2024-12-16 02:58:10.819415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.331 qpair failed and we were unable to recover it. 00:36:40.331 [2024-12-16 02:58:10.819587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.331 [2024-12-16 02:58:10.819619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.331 qpair failed and we were unable to recover it. 00:36:40.331 [2024-12-16 02:58:10.819726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.331 [2024-12-16 02:58:10.819758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.331 qpair failed and we were unable to recover it. 00:36:40.331 [2024-12-16 02:58:10.819933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.331 [2024-12-16 02:58:10.819967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.331 qpair failed and we were unable to recover it. 00:36:40.331 [2024-12-16 02:58:10.820145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.331 [2024-12-16 02:58:10.820178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 
00:36:40.332 [2024-12-16 02:58:10.820326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.820358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 00:36:40.332 [2024-12-16 02:58:10.820530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.820562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 00:36:40.332 [2024-12-16 02:58:10.820696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.820727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 00:36:40.332 [2024-12-16 02:58:10.820860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.820893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 00:36:40.332 [2024-12-16 02:58:10.821002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.821034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 
00:36:40.332 [2024-12-16 02:58:10.821221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.821253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 00:36:40.332 [2024-12-16 02:58:10.821431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.821462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 00:36:40.332 [2024-12-16 02:58:10.821566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.821597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 00:36:40.332 [2024-12-16 02:58:10.821710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.821741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 00:36:40.332 [2024-12-16 02:58:10.821963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.821996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 
00:36:40.332 [2024-12-16 02:58:10.822117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.822150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 00:36:40.332 [2024-12-16 02:58:10.822263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.822297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 00:36:40.332 [2024-12-16 02:58:10.822406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.822437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 00:36:40.332 [2024-12-16 02:58:10.822540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.822572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 00:36:40.332 [2024-12-16 02:58:10.822705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.822736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 
00:36:40.332 [2024-12-16 02:58:10.822928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.822961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 00:36:40.332 [2024-12-16 02:58:10.823243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.823276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 00:36:40.332 [2024-12-16 02:58:10.823442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.823475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 00:36:40.332 [2024-12-16 02:58:10.823604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.823635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 00:36:40.332 [2024-12-16 02:58:10.823807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.823838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 
00:36:40.332 [2024-12-16 02:58:10.823965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.823998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 00:36:40.332 [2024-12-16 02:58:10.824108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.824140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 00:36:40.332 [2024-12-16 02:58:10.824247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.824278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 00:36:40.332 [2024-12-16 02:58:10.824443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.824474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 00:36:40.332 [2024-12-16 02:58:10.824590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.824628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 
00:36:40.332 [2024-12-16 02:58:10.824800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.824831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 00:36:40.332 [2024-12-16 02:58:10.825079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.825111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 00:36:40.332 [2024-12-16 02:58:10.825219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.825250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 00:36:40.332 [2024-12-16 02:58:10.825444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.825476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 00:36:40.332 [2024-12-16 02:58:10.825577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.825609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 
00:36:40.332 [2024-12-16 02:58:10.825713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.825744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 00:36:40.332 [2024-12-16 02:58:10.825920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.825954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 00:36:40.332 [2024-12-16 02:58:10.826062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.826093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 00:36:40.332 [2024-12-16 02:58:10.826279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.826310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 00:36:40.332 [2024-12-16 02:58:10.826420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.826452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 
00:36:40.332 [2024-12-16 02:58:10.826565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.826596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 00:36:40.332 [2024-12-16 02:58:10.826783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.826815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 00:36:40.332 [2024-12-16 02:58:10.826976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.332 [2024-12-16 02:58:10.827009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.332 qpair failed and we were unable to recover it. 00:36:40.332 [2024-12-16 02:58:10.827227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.333 [2024-12-16 02:58:10.827259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.333 qpair failed and we were unable to recover it. 00:36:40.333 [2024-12-16 02:58:10.827449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.333 [2024-12-16 02:58:10.827485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.333 qpair failed and we were unable to recover it. 
00:36:40.333 [2024-12-16 02:58:10.827594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.333 [2024-12-16 02:58:10.827625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.333 qpair failed and we were unable to recover it. 00:36:40.333 [2024-12-16 02:58:10.827830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.333 [2024-12-16 02:58:10.827888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.333 qpair failed and we were unable to recover it. 00:36:40.333 [2024-12-16 02:58:10.828079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.333 [2024-12-16 02:58:10.828111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.333 qpair failed and we were unable to recover it. 00:36:40.333 [2024-12-16 02:58:10.828227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.333 [2024-12-16 02:58:10.828258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.333 qpair failed and we were unable to recover it. 00:36:40.333 [2024-12-16 02:58:10.828438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.333 [2024-12-16 02:58:10.828469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.333 qpair failed and we were unable to recover it. 
00:36:40.333 [2024-12-16 02:58:10.828579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.333 [2024-12-16 02:58:10.828614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.333 qpair failed and we were unable to recover it. 00:36:40.333 [2024-12-16 02:58:10.828716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.333 [2024-12-16 02:58:10.828746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.333 qpair failed and we were unable to recover it. 00:36:40.333 [2024-12-16 02:58:10.828877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.333 [2024-12-16 02:58:10.828910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.333 qpair failed and we were unable to recover it. 00:36:40.333 [2024-12-16 02:58:10.829018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.333 [2024-12-16 02:58:10.829050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.333 qpair failed and we were unable to recover it. 00:36:40.333 [2024-12-16 02:58:10.829154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.333 [2024-12-16 02:58:10.829187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.333 qpair failed and we were unable to recover it. 
00:36:40.333 [2024-12-16 02:58:10.829426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.333 [2024-12-16 02:58:10.829458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.333 qpair failed and we were unable to recover it. 00:36:40.333 [2024-12-16 02:58:10.829595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.333 [2024-12-16 02:58:10.829627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.333 qpair failed and we were unable to recover it. 00:36:40.333 [2024-12-16 02:58:10.829743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.333 [2024-12-16 02:58:10.829775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.333 qpair failed and we were unable to recover it. 00:36:40.333 [2024-12-16 02:58:10.829950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.333 [2024-12-16 02:58:10.829982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.333 qpair failed and we were unable to recover it. 00:36:40.333 [2024-12-16 02:58:10.830102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.333 [2024-12-16 02:58:10.830133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.333 qpair failed and we were unable to recover it. 
00:36:40.333 [2024-12-16 02:58:10.830240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.333 [2024-12-16 02:58:10.830272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.333 qpair failed and we were unable to recover it. 
00:36:40.336 [... the same record pair repeats ~114 more times between 02:58:10.830383 and 02:58:10.853011: posix.c:1054:posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." ...] 
00:36:40.336 [2024-12-16 02:58:10.853134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.336 [2024-12-16 02:58:10.853165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.336 qpair failed and we were unable to recover it. 00:36:40.336 [2024-12-16 02:58:10.853284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.336 [2024-12-16 02:58:10.853316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.336 qpair failed and we were unable to recover it. 00:36:40.336 [2024-12-16 02:58:10.853444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.336 [2024-12-16 02:58:10.853475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.336 qpair failed and we were unable to recover it. 00:36:40.336 [2024-12-16 02:58:10.853591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.336 [2024-12-16 02:58:10.853622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.336 qpair failed and we were unable to recover it. 00:36:40.336 [2024-12-16 02:58:10.853864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.336 [2024-12-16 02:58:10.853897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.336 qpair failed and we were unable to recover it. 
00:36:40.336 [2024-12-16 02:58:10.854079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.336 [2024-12-16 02:58:10.854111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.336 qpair failed and we were unable to recover it. 00:36:40.336 [2024-12-16 02:58:10.854244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.336 [2024-12-16 02:58:10.854276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.336 qpair failed and we were unable to recover it. 00:36:40.336 [2024-12-16 02:58:10.854484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.336 [2024-12-16 02:58:10.854515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.336 qpair failed and we were unable to recover it. 00:36:40.336 [2024-12-16 02:58:10.854635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.336 [2024-12-16 02:58:10.854666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.336 qpair failed and we were unable to recover it. 00:36:40.336 [2024-12-16 02:58:10.854772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.336 [2024-12-16 02:58:10.854804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.336 qpair failed and we were unable to recover it. 
00:36:40.336 [2024-12-16 02:58:10.854940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.336 [2024-12-16 02:58:10.854973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.336 qpair failed and we were unable to recover it. 00:36:40.336 [2024-12-16 02:58:10.855076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.336 [2024-12-16 02:58:10.855107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.336 qpair failed and we were unable to recover it. 00:36:40.336 [2024-12-16 02:58:10.855231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.336 [2024-12-16 02:58:10.855262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.336 qpair failed and we were unable to recover it. 00:36:40.336 [2024-12-16 02:58:10.855381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.336 [2024-12-16 02:58:10.855413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.336 qpair failed and we were unable to recover it. 00:36:40.336 [2024-12-16 02:58:10.855520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.336 [2024-12-16 02:58:10.855558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.336 qpair failed and we were unable to recover it. 
00:36:40.336 [2024-12-16 02:58:10.855681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.336 [2024-12-16 02:58:10.855712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.336 qpair failed and we were unable to recover it. 00:36:40.336 [2024-12-16 02:58:10.855900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.336 [2024-12-16 02:58:10.855933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.336 qpair failed and we were unable to recover it. 00:36:40.336 [2024-12-16 02:58:10.856047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.336 [2024-12-16 02:58:10.856079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.336 qpair failed and we were unable to recover it. 00:36:40.336 [2024-12-16 02:58:10.856196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.336 [2024-12-16 02:58:10.856227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.336 qpair failed and we were unable to recover it. 00:36:40.336 [2024-12-16 02:58:10.856335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.336 [2024-12-16 02:58:10.856366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.336 qpair failed and we were unable to recover it. 
00:36:40.336 [2024-12-16 02:58:10.856472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.336 [2024-12-16 02:58:10.856503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.337 qpair failed and we were unable to recover it. 00:36:40.337 [2024-12-16 02:58:10.856742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.337 [2024-12-16 02:58:10.856773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.337 qpair failed and we were unable to recover it. 00:36:40.337 [2024-12-16 02:58:10.856898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.337 [2024-12-16 02:58:10.856931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.337 qpair failed and we were unable to recover it. 00:36:40.337 [2024-12-16 02:58:10.857057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.337 [2024-12-16 02:58:10.857089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.337 qpair failed and we were unable to recover it. 00:36:40.337 [2024-12-16 02:58:10.857205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.337 [2024-12-16 02:58:10.857236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.337 qpair failed and we were unable to recover it. 
00:36:40.337 [2024-12-16 02:58:10.857406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.337 [2024-12-16 02:58:10.857437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.337 qpair failed and we were unable to recover it. 00:36:40.337 [2024-12-16 02:58:10.857559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.337 [2024-12-16 02:58:10.857591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.337 qpair failed and we were unable to recover it. 00:36:40.337 [2024-12-16 02:58:10.857697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.337 [2024-12-16 02:58:10.857728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.337 qpair failed and we were unable to recover it. 00:36:40.337 [2024-12-16 02:58:10.857843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.337 [2024-12-16 02:58:10.857883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.337 qpair failed and we were unable to recover it. 00:36:40.337 [2024-12-16 02:58:10.858067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.337 [2024-12-16 02:58:10.858099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.337 qpair failed and we were unable to recover it. 
00:36:40.337 [2024-12-16 02:58:10.858280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.337 [2024-12-16 02:58:10.858312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.337 qpair failed and we were unable to recover it. 00:36:40.337 [2024-12-16 02:58:10.858421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.337 [2024-12-16 02:58:10.858452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.337 qpair failed and we were unable to recover it. 00:36:40.337 [2024-12-16 02:58:10.858637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.337 [2024-12-16 02:58:10.858668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.337 qpair failed and we were unable to recover it. 00:36:40.337 [2024-12-16 02:58:10.858796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.337 [2024-12-16 02:58:10.858828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.337 qpair failed and we were unable to recover it. 00:36:40.337 [2024-12-16 02:58:10.858952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.337 [2024-12-16 02:58:10.858984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.337 qpair failed and we were unable to recover it. 
00:36:40.337 [2024-12-16 02:58:10.859110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.337 [2024-12-16 02:58:10.859141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.337 qpair failed and we were unable to recover it. 00:36:40.337 [2024-12-16 02:58:10.859332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.337 [2024-12-16 02:58:10.859365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.337 qpair failed and we were unable to recover it. 00:36:40.337 [2024-12-16 02:58:10.859482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.337 [2024-12-16 02:58:10.859514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.337 qpair failed and we were unable to recover it. 00:36:40.337 [2024-12-16 02:58:10.859696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.337 [2024-12-16 02:58:10.859727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.337 qpair failed and we were unable to recover it. 00:36:40.337 [2024-12-16 02:58:10.859842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.337 [2024-12-16 02:58:10.859884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.337 qpair failed and we were unable to recover it. 
00:36:40.337 [2024-12-16 02:58:10.860215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.337 [2024-12-16 02:58:10.860287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.337 qpair failed and we were unable to recover it.
[log elided: the same connect() failure repeats for tqpair=0xd9fcd0 through 02:58:10.870]
00:36:40.338 [2024-12-16 02:58:10.870185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.338 [2024-12-16 02:58:10.870222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.338 qpair failed and we were unable to recover it. 00:36:40.338 [2024-12-16 02:58:10.870327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.338 [2024-12-16 02:58:10.870358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.338 qpair failed and we were unable to recover it. 00:36:40.338 [2024-12-16 02:58:10.870555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.338 [2024-12-16 02:58:10.870588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.338 qpair failed and we were unable to recover it. 00:36:40.338 [2024-12-16 02:58:10.870724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.338 [2024-12-16 02:58:10.870756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.338 qpair failed and we were unable to recover it. 00:36:40.338 [2024-12-16 02:58:10.870884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.870919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 
00:36:40.339 [2024-12-16 02:58:10.871094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.871125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 00:36:40.339 [2024-12-16 02:58:10.871230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.871261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 00:36:40.339 [2024-12-16 02:58:10.871433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.871465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 00:36:40.339 [2024-12-16 02:58:10.871596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.871628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 00:36:40.339 [2024-12-16 02:58:10.871728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.871759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 
00:36:40.339 [2024-12-16 02:58:10.871882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.871915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 00:36:40.339 [2024-12-16 02:58:10.872138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.872170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 00:36:40.339 [2024-12-16 02:58:10.872361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.872392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 00:36:40.339 [2024-12-16 02:58:10.872566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.872597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 00:36:40.339 [2024-12-16 02:58:10.872815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.872859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 
00:36:40.339 [2024-12-16 02:58:10.872981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.873012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 00:36:40.339 [2024-12-16 02:58:10.873143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.873174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 00:36:40.339 [2024-12-16 02:58:10.873361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.873392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 00:36:40.339 [2024-12-16 02:58:10.873499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.873530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 00:36:40.339 [2024-12-16 02:58:10.873739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.873770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 
00:36:40.339 [2024-12-16 02:58:10.873979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.874012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 00:36:40.339 [2024-12-16 02:58:10.874143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.874176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 00:36:40.339 [2024-12-16 02:58:10.874359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.874390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 00:36:40.339 [2024-12-16 02:58:10.874562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.874593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 00:36:40.339 [2024-12-16 02:58:10.874762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.874793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 
00:36:40.339 [2024-12-16 02:58:10.874918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.874951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 00:36:40.339 [2024-12-16 02:58:10.875061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.875091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 00:36:40.339 [2024-12-16 02:58:10.875193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.875224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 00:36:40.339 [2024-12-16 02:58:10.875412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.875444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 00:36:40.339 [2024-12-16 02:58:10.875563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.875595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 
00:36:40.339 [2024-12-16 02:58:10.875765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.875796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 00:36:40.339 [2024-12-16 02:58:10.875985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.876019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 00:36:40.339 [2024-12-16 02:58:10.876206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.876236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 00:36:40.339 [2024-12-16 02:58:10.876359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.876391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 00:36:40.339 [2024-12-16 02:58:10.876491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.876523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 
00:36:40.339 [2024-12-16 02:58:10.876722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.876753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 00:36:40.339 [2024-12-16 02:58:10.876879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.876912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 00:36:40.339 [2024-12-16 02:58:10.877021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.877052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.339 qpair failed and we were unable to recover it. 00:36:40.339 [2024-12-16 02:58:10.877235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.339 [2024-12-16 02:58:10.877265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.877436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.877467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 
00:36:40.340 [2024-12-16 02:58:10.877567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.877597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.877782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.877820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.877961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.877993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.878113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.878143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.878405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.878436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 
00:36:40.340 [2024-12-16 02:58:10.878550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.878582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.878771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.878802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.879073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.879107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.879317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.879348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.879456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.879487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 
00:36:40.340 [2024-12-16 02:58:10.879591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.879621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.879817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.879857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.879959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.879991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.880164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.880195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.880293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.880324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 
00:36:40.340 [2024-12-16 02:58:10.880464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.880496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.880606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.880637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.880873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.880906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.881093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.881125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.881384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.881415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 
00:36:40.340 [2024-12-16 02:58:10.881603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.881634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.881742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.881773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.881949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.881982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.882244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.882277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.882448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.882479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 
00:36:40.340 [2024-12-16 02:58:10.882603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.882634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.882866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.882898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.883015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.883046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.883219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.883250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.883360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.883391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 
00:36:40.340 [2024-12-16 02:58:10.883515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.883546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.883718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.883749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.883927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.883960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.884201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.884232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.884405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.884437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 
00:36:40.340 [2024-12-16 02:58:10.884682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.884713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.884819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.884859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.884980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.340 [2024-12-16 02:58:10.885011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.340 qpair failed and we were unable to recover it. 00:36:40.340 [2024-12-16 02:58:10.885196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.341 [2024-12-16 02:58:10.885227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.341 qpair failed and we were unable to recover it. 00:36:40.341 [2024-12-16 02:58:10.885345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.341 [2024-12-16 02:58:10.885377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.341 qpair failed and we were unable to recover it. 
00:36:40.343 [2024-12-16 02:58:10.907948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.343 [2024-12-16 02:58:10.907981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.343 qpair failed and we were unable to recover it. 00:36:40.343 [2024-12-16 02:58:10.908161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.343 [2024-12-16 02:58:10.908193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.343 qpair failed and we were unable to recover it. 00:36:40.343 [2024-12-16 02:58:10.908398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.343 [2024-12-16 02:58:10.908429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.343 qpair failed and we were unable to recover it. 00:36:40.343 [2024-12-16 02:58:10.908547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.343 [2024-12-16 02:58:10.908579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.343 qpair failed and we were unable to recover it. 00:36:40.343 [2024-12-16 02:58:10.908813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.343 [2024-12-16 02:58:10.908844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.343 qpair failed and we were unable to recover it. 
00:36:40.343 [2024-12-16 02:58:10.909106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.343 [2024-12-16 02:58:10.909139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.343 qpair failed and we were unable to recover it. 00:36:40.343 [2024-12-16 02:58:10.909324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.909355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 00:36:40.344 [2024-12-16 02:58:10.909547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.909578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 00:36:40.344 [2024-12-16 02:58:10.909748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.909780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 00:36:40.344 [2024-12-16 02:58:10.910038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.910071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 
00:36:40.344 [2024-12-16 02:58:10.910193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.910224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 00:36:40.344 [2024-12-16 02:58:10.910325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.910356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 00:36:40.344 [2024-12-16 02:58:10.910480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.910512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 00:36:40.344 [2024-12-16 02:58:10.910638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.910669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 00:36:40.344 [2024-12-16 02:58:10.910904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.910938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 
00:36:40.344 [2024-12-16 02:58:10.911126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.911159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 00:36:40.344 [2024-12-16 02:58:10.911282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.911313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 00:36:40.344 [2024-12-16 02:58:10.911495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.911525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 00:36:40.344 [2024-12-16 02:58:10.911662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.911694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 00:36:40.344 [2024-12-16 02:58:10.911808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.911839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 
00:36:40.344 [2024-12-16 02:58:10.912018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.912049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 00:36:40.344 [2024-12-16 02:58:10.912228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.912260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 00:36:40.344 [2024-12-16 02:58:10.912385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.912416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 00:36:40.344 [2024-12-16 02:58:10.912545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.912576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 00:36:40.344 [2024-12-16 02:58:10.912677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.912708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 
00:36:40.344 [2024-12-16 02:58:10.912814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.912852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 00:36:40.344 [2024-12-16 02:58:10.913040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.913071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 00:36:40.344 [2024-12-16 02:58:10.913246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.913278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 00:36:40.344 [2024-12-16 02:58:10.913389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.913426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 00:36:40.344 [2024-12-16 02:58:10.913531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.913563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 
00:36:40.344 [2024-12-16 02:58:10.913814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.913845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 00:36:40.344 [2024-12-16 02:58:10.914093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.914126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 00:36:40.344 [2024-12-16 02:58:10.914388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.914419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 00:36:40.344 [2024-12-16 02:58:10.914609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.914641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 00:36:40.344 [2024-12-16 02:58:10.914829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.914872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 
00:36:40.344 [2024-12-16 02:58:10.915080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.915112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 00:36:40.344 [2024-12-16 02:58:10.915376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.915407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 00:36:40.344 [2024-12-16 02:58:10.915526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.915557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 00:36:40.344 [2024-12-16 02:58:10.915745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.915776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 00:36:40.344 [2024-12-16 02:58:10.915979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.916012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 
00:36:40.344 [2024-12-16 02:58:10.916193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.344 [2024-12-16 02:58:10.916224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.344 qpair failed and we were unable to recover it. 00:36:40.344 [2024-12-16 02:58:10.916464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.916496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 00:36:40.345 [2024-12-16 02:58:10.916740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.916772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 00:36:40.345 [2024-12-16 02:58:10.917029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.917063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 00:36:40.345 [2024-12-16 02:58:10.917249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.917282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 
00:36:40.345 [2024-12-16 02:58:10.917469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.917499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 00:36:40.345 [2024-12-16 02:58:10.917703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.917734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 00:36:40.345 [2024-12-16 02:58:10.917980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.918014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 00:36:40.345 [2024-12-16 02:58:10.918146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.918177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 00:36:40.345 [2024-12-16 02:58:10.918441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.918472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 
00:36:40.345 [2024-12-16 02:58:10.918588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.918620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 00:36:40.345 [2024-12-16 02:58:10.918742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.918773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 00:36:40.345 [2024-12-16 02:58:10.919080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.919115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 00:36:40.345 [2024-12-16 02:58:10.919224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.919256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 00:36:40.345 [2024-12-16 02:58:10.919389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.919421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 
00:36:40.345 [2024-12-16 02:58:10.919548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.919586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 00:36:40.345 [2024-12-16 02:58:10.919709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.919740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 00:36:40.345 [2024-12-16 02:58:10.919982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.920014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 00:36:40.345 [2024-12-16 02:58:10.920150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.920182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 00:36:40.345 [2024-12-16 02:58:10.920421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.920453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 
00:36:40.345 [2024-12-16 02:58:10.920651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.920682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 00:36:40.345 [2024-12-16 02:58:10.920788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.920820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 00:36:40.345 [2024-12-16 02:58:10.920947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.920979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 00:36:40.345 [2024-12-16 02:58:10.921183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.921215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 00:36:40.345 [2024-12-16 02:58:10.921404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.921436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 
00:36:40.345 [2024-12-16 02:58:10.921672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.921704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 00:36:40.345 [2024-12-16 02:58:10.921894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.921928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 00:36:40.345 [2024-12-16 02:58:10.922040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.922072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 00:36:40.345 [2024-12-16 02:58:10.922260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.922292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 00:36:40.345 [2024-12-16 02:58:10.922535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.922568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 
00:36:40.345 [2024-12-16 02:58:10.922827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.922866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 00:36:40.345 [2024-12-16 02:58:10.923127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.923158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 00:36:40.345 [2024-12-16 02:58:10.923274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.923306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 00:36:40.345 [2024-12-16 02:58:10.923523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.923554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 00:36:40.345 [2024-12-16 02:58:10.923725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.345 [2024-12-16 02:58:10.923756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.345 qpair failed and we were unable to recover it. 
00:36:40.345 [2024-12-16 02:58:10.923939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.345 [2024-12-16 02:58:10.923972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.345 qpair failed and we were unable to recover it.
00:36:40.347 [2024-12-16 02:58:10.936237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.347 [2024-12-16 02:58:10.936307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.347 qpair failed and we were unable to recover it.
00:36:40.628 [2024-12-16 02:58:10.944383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.628 [2024-12-16 02:58:10.944452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.628 qpair failed and we were unable to recover it.
00:36:40.629 [2024-12-16 02:58:10.946944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.629 [2024-12-16 02:58:10.946988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.629 qpair failed and we were unable to recover it. 00:36:40.629 [2024-12-16 02:58:10.947126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.629 [2024-12-16 02:58:10.947158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.629 qpair failed and we were unable to recover it. 00:36:40.629 [2024-12-16 02:58:10.947379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.629 [2024-12-16 02:58:10.947415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.629 qpair failed and we were unable to recover it. 00:36:40.629 [2024-12-16 02:58:10.947587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.629 [2024-12-16 02:58:10.947618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.629 qpair failed and we were unable to recover it. 00:36:40.629 [2024-12-16 02:58:10.947746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.629 [2024-12-16 02:58:10.947777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.629 qpair failed and we were unable to recover it. 
00:36:40.629 [2024-12-16 02:58:10.948045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.629 [2024-12-16 02:58:10.948078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.629 qpair failed and we were unable to recover it. 00:36:40.629 [2024-12-16 02:58:10.948264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.629 [2024-12-16 02:58:10.948294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.629 qpair failed and we were unable to recover it. 00:36:40.629 [2024-12-16 02:58:10.948408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.629 [2024-12-16 02:58:10.948439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.629 qpair failed and we were unable to recover it. 00:36:40.629 [2024-12-16 02:58:10.948612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.629 [2024-12-16 02:58:10.948643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.629 qpair failed and we were unable to recover it. 00:36:40.629 [2024-12-16 02:58:10.948841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.629 [2024-12-16 02:58:10.948885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.629 qpair failed and we were unable to recover it. 
00:36:40.629 [2024-12-16 02:58:10.949055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.629 [2024-12-16 02:58:10.949086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.629 qpair failed and we were unable to recover it. 00:36:40.629 [2024-12-16 02:58:10.949264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.629 [2024-12-16 02:58:10.949297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.629 qpair failed and we were unable to recover it. 00:36:40.629 [2024-12-16 02:58:10.949570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.629 [2024-12-16 02:58:10.949601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.629 qpair failed and we were unable to recover it. 00:36:40.629 [2024-12-16 02:58:10.949789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.629 [2024-12-16 02:58:10.949820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.629 qpair failed and we were unable to recover it. 00:36:40.629 [2024-12-16 02:58:10.949948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.629 [2024-12-16 02:58:10.949981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.629 qpair failed and we were unable to recover it. 
00:36:40.629 [2024-12-16 02:58:10.950219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.629 [2024-12-16 02:58:10.950250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.629 qpair failed and we were unable to recover it. 00:36:40.629 [2024-12-16 02:58:10.950504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.629 [2024-12-16 02:58:10.950535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.629 qpair failed and we were unable to recover it. 00:36:40.629 [2024-12-16 02:58:10.950641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.629 [2024-12-16 02:58:10.950672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.629 qpair failed and we were unable to recover it. 00:36:40.629 [2024-12-16 02:58:10.950797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.629 [2024-12-16 02:58:10.950828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.629 qpair failed and we were unable to recover it. 00:36:40.629 [2024-12-16 02:58:10.950967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.629 [2024-12-16 02:58:10.950999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.629 qpair failed and we were unable to recover it. 
00:36:40.629 [2024-12-16 02:58:10.951127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.629 [2024-12-16 02:58:10.951159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.629 qpair failed and we were unable to recover it. 00:36:40.629 [2024-12-16 02:58:10.951388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.629 [2024-12-16 02:58:10.951419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.629 qpair failed and we were unable to recover it. 00:36:40.629 [2024-12-16 02:58:10.951530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.629 [2024-12-16 02:58:10.951562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.629 qpair failed and we were unable to recover it. 00:36:40.629 [2024-12-16 02:58:10.951737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.629 [2024-12-16 02:58:10.951768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.629 qpair failed and we were unable to recover it. 00:36:40.629 [2024-12-16 02:58:10.951949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.629 [2024-12-16 02:58:10.951982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.629 qpair failed and we were unable to recover it. 
00:36:40.629 [2024-12-16 02:58:10.952226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.952257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.952425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.952456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.952639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.952671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.952845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.952886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.953064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.953095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 
00:36:40.630 [2024-12-16 02:58:10.953287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.953317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.953491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.953522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.953707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.953738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.953862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.953894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.954129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.954161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 
00:36:40.630 [2024-12-16 02:58:10.954276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.954308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.954544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.954574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.954783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.954815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.955097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.955129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.955337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.955367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 
00:36:40.630 [2024-12-16 02:58:10.955482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.955519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.955687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.955719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.955834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.955876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.956058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.956090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.956264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.956295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 
00:36:40.630 [2024-12-16 02:58:10.956430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.956460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.956579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.956610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.956858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.956890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.957069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.957100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.957224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.957255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 
00:36:40.630 [2024-12-16 02:58:10.957514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.957545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.957680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.957711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.957903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.957937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.958176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.958207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.958465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.958497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 
00:36:40.630 [2024-12-16 02:58:10.958699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.958730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.958990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.959022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.959210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.959240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.959356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.959387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.959555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.959586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 
00:36:40.630 [2024-12-16 02:58:10.959774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.959805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.960054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.960086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.960343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.960374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.960612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.630 [2024-12-16 02:58:10.960643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.630 qpair failed and we were unable to recover it. 00:36:40.630 [2024-12-16 02:58:10.960815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.631 [2024-12-16 02:58:10.960856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.631 qpair failed and we were unable to recover it. 
00:36:40.631 [2024-12-16 02:58:10.961100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.631 [2024-12-16 02:58:10.961132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.631 qpair failed and we were unable to recover it. 00:36:40.631 [2024-12-16 02:58:10.961258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.631 [2024-12-16 02:58:10.961288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.631 qpair failed and we were unable to recover it. 00:36:40.631 [2024-12-16 02:58:10.961566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.631 [2024-12-16 02:58:10.961598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.631 qpair failed and we were unable to recover it. 00:36:40.631 [2024-12-16 02:58:10.961783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.631 [2024-12-16 02:58:10.961814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.631 qpair failed and we were unable to recover it. 00:36:40.631 [2024-12-16 02:58:10.962022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.631 [2024-12-16 02:58:10.962055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.631 qpair failed and we were unable to recover it. 
00:36:40.631 [2024-12-16 02:58:10.962233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.631 [2024-12-16 02:58:10.962264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.631 qpair failed and we were unable to recover it. 00:36:40.631 [2024-12-16 02:58:10.962528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.631 [2024-12-16 02:58:10.962559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.631 qpair failed and we were unable to recover it. 00:36:40.631 [2024-12-16 02:58:10.962693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.631 [2024-12-16 02:58:10.962724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.631 qpair failed and we were unable to recover it. 00:36:40.631 [2024-12-16 02:58:10.962895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.631 [2024-12-16 02:58:10.962928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.631 qpair failed and we were unable to recover it. 00:36:40.631 [2024-12-16 02:58:10.963054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.631 [2024-12-16 02:58:10.963085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.631 qpair failed and we were unable to recover it. 
00:36:40.631 [2024-12-16 02:58:10.963276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.631 [2024-12-16 02:58:10.963306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.631 qpair failed and we were unable to recover it. 
00:36:40.631 [log condensed: the same posix_sock_create connect() failed (errno = 111, ECONNREFUSED) / nvme_tcp_qpair_connect_sock error pair for tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 repeats from 02:58:10.963494 through 02:58:10.969398, each attempt ending "qpair failed and we were unable to recover it."] 
00:36:40.632 [log condensed: two further identical connect() failed (errno = 111) attempts on tqpair=0x7f2744000b90 at 02:58:10.969501 and 02:58:10.969647, both unrecovered; the failing tqpair then changes to 0x7f2738000b90.] 
00:36:40.632 [2024-12-16 02:58:10.969958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.632 [2024-12-16 02:58:10.970002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.632 qpair failed and we were unable to recover it. 
00:36:40.634 [log condensed: the same connect() failed (errno = 111, ECONNREFUSED) / nvme_tcp_qpair_connect_sock error pair for tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 repeats from 02:58:10.970182 through 02:58:10.987839, each attempt ending "qpair failed and we were unable to recover it."] 
00:36:40.634 [2024-12-16 02:58:10.988065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.634 [2024-12-16 02:58:10.988100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.634 qpair failed and we were unable to recover it. 00:36:40.634 [2024-12-16 02:58:10.988304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.634 [2024-12-16 02:58:10.988343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.634 qpair failed and we were unable to recover it. 00:36:40.634 [2024-12-16 02:58:10.988463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.634 [2024-12-16 02:58:10.988494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.634 qpair failed and we were unable to recover it. 00:36:40.634 [2024-12-16 02:58:10.988737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.634 [2024-12-16 02:58:10.988771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.634 qpair failed and we were unable to recover it. 00:36:40.634 [2024-12-16 02:58:10.988914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.634 [2024-12-16 02:58:10.988960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.634 qpair failed and we were unable to recover it. 
00:36:40.634 [2024-12-16 02:58:10.989149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.634 [2024-12-16 02:58:10.989182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.634 qpair failed and we were unable to recover it. 00:36:40.634 [2024-12-16 02:58:10.989377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.634 [2024-12-16 02:58:10.989411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.634 qpair failed and we were unable to recover it. 00:36:40.634 [2024-12-16 02:58:10.989584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.634 [2024-12-16 02:58:10.989626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.634 qpair failed and we were unable to recover it. 00:36:40.634 [2024-12-16 02:58:10.989762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.634 [2024-12-16 02:58:10.989793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.634 qpair failed and we were unable to recover it. 00:36:40.634 [2024-12-16 02:58:10.989943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.634 [2024-12-16 02:58:10.989980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.634 qpair failed and we were unable to recover it. 
00:36:40.634 [2024-12-16 02:58:10.990186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.634 [2024-12-16 02:58:10.990221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.634 qpair failed and we were unable to recover it. 00:36:40.634 [2024-12-16 02:58:10.990483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.634 [2024-12-16 02:58:10.990525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.634 qpair failed and we were unable to recover it. 00:36:40.634 [2024-12-16 02:58:10.990722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.634 [2024-12-16 02:58:10.990755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.634 qpair failed and we were unable to recover it. 00:36:40.634 [2024-12-16 02:58:10.990975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.634 [2024-12-16 02:58:10.991012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.634 qpair failed and we were unable to recover it. 00:36:40.634 [2024-12-16 02:58:10.991193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.634 [2024-12-16 02:58:10.991233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.634 qpair failed and we were unable to recover it. 
00:36:40.634 [2024-12-16 02:58:10.991453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.634 [2024-12-16 02:58:10.991491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.634 qpair failed and we were unable to recover it. 00:36:40.634 [2024-12-16 02:58:10.991630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.634 [2024-12-16 02:58:10.991664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.634 qpair failed and we were unable to recover it. 00:36:40.634 [2024-12-16 02:58:10.991782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.634 [2024-12-16 02:58:10.991816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.634 qpair failed and we were unable to recover it. 00:36:40.634 [2024-12-16 02:58:10.992106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.634 [2024-12-16 02:58:10.992144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.634 qpair failed and we were unable to recover it. 00:36:40.634 [2024-12-16 02:58:10.992346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.634 [2024-12-16 02:58:10.992382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.634 qpair failed and we were unable to recover it. 
00:36:40.634 [2024-12-16 02:58:10.992524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.634 [2024-12-16 02:58:10.992559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.634 qpair failed and we were unable to recover it. 00:36:40.634 [2024-12-16 02:58:10.992770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.634 [2024-12-16 02:58:10.992811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.634 qpair failed and we were unable to recover it. 00:36:40.634 [2024-12-16 02:58:10.993110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.634 [2024-12-16 02:58:10.993146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.634 qpair failed and we were unable to recover it. 00:36:40.634 [2024-12-16 02:58:10.993390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:10.993433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 00:36:40.635 [2024-12-16 02:58:10.993689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:10.993725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 
00:36:40.635 [2024-12-16 02:58:10.993932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:10.993972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 00:36:40.635 [2024-12-16 02:58:10.994260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:10.994302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 00:36:40.635 [2024-12-16 02:58:10.994431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:10.994466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 00:36:40.635 [2024-12-16 02:58:10.994662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:10.994706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 00:36:40.635 [2024-12-16 02:58:10.994842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:10.994888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 
00:36:40.635 [2024-12-16 02:58:10.995143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:10.995178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 00:36:40.635 [2024-12-16 02:58:10.995431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:10.995464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 00:36:40.635 [2024-12-16 02:58:10.995577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:10.995617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 00:36:40.635 [2024-12-16 02:58:10.995804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:10.995838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 00:36:40.635 [2024-12-16 02:58:10.996067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:10.996102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 
00:36:40.635 [2024-12-16 02:58:10.996278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:10.996320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 00:36:40.635 [2024-12-16 02:58:10.996512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:10.996554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 00:36:40.635 [2024-12-16 02:58:10.996690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:10.996731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 00:36:40.635 [2024-12-16 02:58:10.996947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:10.996984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 00:36:40.635 [2024-12-16 02:58:10.997201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:10.997246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 
00:36:40.635 [2024-12-16 02:58:10.997498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:10.997542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 00:36:40.635 [2024-12-16 02:58:10.997653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:10.997695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 00:36:40.635 [2024-12-16 02:58:10.997890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:10.997936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 00:36:40.635 [2024-12-16 02:58:10.998194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:10.998239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 00:36:40.635 [2024-12-16 02:58:10.998447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:10.998482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 
00:36:40.635 [2024-12-16 02:58:10.998593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:10.998628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 00:36:40.635 [2024-12-16 02:58:10.998812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:10.998861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 00:36:40.635 [2024-12-16 02:58:10.999006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:10.999047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 00:36:40.635 [2024-12-16 02:58:10.999321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:10.999362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 00:36:40.635 [2024-12-16 02:58:10.999551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:10.999592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 
00:36:40.635 [2024-12-16 02:58:10.999769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:10.999809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 00:36:40.635 [2024-12-16 02:58:11.000071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:11.000142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 00:36:40.635 [2024-12-16 02:58:11.000408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:11.000444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 00:36:40.635 [2024-12-16 02:58:11.000629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:11.000661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 00:36:40.635 [2024-12-16 02:58:11.000860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:11.000893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 
00:36:40.635 [2024-12-16 02:58:11.001066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:11.001099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 00:36:40.635 [2024-12-16 02:58:11.001361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:11.001393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 00:36:40.635 [2024-12-16 02:58:11.001581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:11.001612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 00:36:40.635 [2024-12-16 02:58:11.001729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:11.001761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 00:36:40.635 [2024-12-16 02:58:11.001947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:11.001980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 
00:36:40.635 [2024-12-16 02:58:11.002149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:11.002181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 00:36:40.635 [2024-12-16 02:58:11.002367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:11.002399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.635 qpair failed and we were unable to recover it. 00:36:40.635 [2024-12-16 02:58:11.002606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.635 [2024-12-16 02:58:11.002638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.636 qpair failed and we were unable to recover it. 00:36:40.636 [2024-12-16 02:58:11.002770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.636 [2024-12-16 02:58:11.002802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.636 qpair failed and we were unable to recover it. 00:36:40.636 [2024-12-16 02:58:11.003004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.636 [2024-12-16 02:58:11.003046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.636 qpair failed and we were unable to recover it. 
00:36:40.636 [2024-12-16 02:58:11.003227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.636 [2024-12-16 02:58:11.003259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.636 qpair failed and we were unable to recover it. 00:36:40.636 [2024-12-16 02:58:11.003431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.636 [2024-12-16 02:58:11.003463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.636 qpair failed and we were unable to recover it. 00:36:40.636 [2024-12-16 02:58:11.003595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.636 [2024-12-16 02:58:11.003626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.636 qpair failed and we were unable to recover it. 00:36:40.636 [2024-12-16 02:58:11.003867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.636 [2024-12-16 02:58:11.003900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.636 qpair failed and we were unable to recover it. 00:36:40.636 [2024-12-16 02:58:11.004028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.636 [2024-12-16 02:58:11.004060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.636 qpair failed and we were unable to recover it. 
00:36:40.636 [2024-12-16 02:58:11.004299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.636 [2024-12-16 02:58:11.004330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.636 qpair failed and we were unable to recover it. 00:36:40.636 [2024-12-16 02:58:11.004594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.636 [2024-12-16 02:58:11.004627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.636 qpair failed and we were unable to recover it. 00:36:40.636 [2024-12-16 02:58:11.004811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.636 [2024-12-16 02:58:11.004843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.636 qpair failed and we were unable to recover it. 00:36:40.636 [2024-12-16 02:58:11.005043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.636 [2024-12-16 02:58:11.005074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.636 qpair failed and we were unable to recover it. 00:36:40.636 [2024-12-16 02:58:11.005202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.636 [2024-12-16 02:58:11.005234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.636 qpair failed and we were unable to recover it. 
00:36:40.636 [2024-12-16 02:58:11.005420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.636 [2024-12-16 02:58:11.005452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.636 qpair failed and we were unable to recover it.
[… same connect()/qpair-recovery error triplet for tqpair=0xd9fcd0 repeated verbatim through 02:58:11.025, timestamps only differing …]
00:36:40.638 [2024-12-16 02:58:11.026032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.638 [2024-12-16 02:58:11.026104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.638 qpair failed and we were unable to recover it.
[… same error triplet for tqpair=0x7f273c000b90 repeated verbatim through 02:58:11.029 …]
00:36:40.639 [2024-12-16 02:58:11.029651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.639 [2024-12-16 02:58:11.029683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.639 qpair failed and we were unable to recover it. 00:36:40.639 [2024-12-16 02:58:11.029865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.639 [2024-12-16 02:58:11.029910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.639 qpair failed and we were unable to recover it. 00:36:40.639 [2024-12-16 02:58:11.030014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.639 [2024-12-16 02:58:11.030044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.639 qpair failed and we were unable to recover it. 00:36:40.639 [2024-12-16 02:58:11.030213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.639 [2024-12-16 02:58:11.030244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.639 qpair failed and we were unable to recover it. 00:36:40.639 [2024-12-16 02:58:11.030450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.639 [2024-12-16 02:58:11.030482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.639 qpair failed and we were unable to recover it. 
00:36:40.639 [2024-12-16 02:58:11.030670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.639 [2024-12-16 02:58:11.030701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.639 qpair failed and we were unable to recover it. 00:36:40.639 [2024-12-16 02:58:11.030811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.639 [2024-12-16 02:58:11.030842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.639 qpair failed and we were unable to recover it. 00:36:40.639 [2024-12-16 02:58:11.031053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.639 [2024-12-16 02:58:11.031085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.639 qpair failed and we were unable to recover it. 00:36:40.639 [2024-12-16 02:58:11.031214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.639 [2024-12-16 02:58:11.031245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.639 qpair failed and we were unable to recover it. 00:36:40.639 [2024-12-16 02:58:11.031509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.639 [2024-12-16 02:58:11.031540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.639 qpair failed and we were unable to recover it. 
00:36:40.639 [2024-12-16 02:58:11.031668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.639 [2024-12-16 02:58:11.031700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.639 qpair failed and we were unable to recover it. 00:36:40.639 [2024-12-16 02:58:11.031880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.639 [2024-12-16 02:58:11.031912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.639 qpair failed and we were unable to recover it. 00:36:40.639 [2024-12-16 02:58:11.032147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.639 [2024-12-16 02:58:11.032179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.639 qpair failed and we were unable to recover it. 00:36:40.639 [2024-12-16 02:58:11.032362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.639 [2024-12-16 02:58:11.032393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.639 qpair failed and we were unable to recover it. 00:36:40.639 [2024-12-16 02:58:11.032514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.639 [2024-12-16 02:58:11.032545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.639 qpair failed and we were unable to recover it. 
00:36:40.639 [2024-12-16 02:58:11.032717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.639 [2024-12-16 02:58:11.032749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.639 qpair failed and we were unable to recover it. 00:36:40.639 [2024-12-16 02:58:11.032955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.639 [2024-12-16 02:58:11.032989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.639 qpair failed and we were unable to recover it. 00:36:40.639 [2024-12-16 02:58:11.033102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.639 [2024-12-16 02:58:11.033134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.639 qpair failed and we were unable to recover it. 00:36:40.639 [2024-12-16 02:58:11.033317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.639 [2024-12-16 02:58:11.033349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.639 qpair failed and we were unable to recover it. 00:36:40.639 [2024-12-16 02:58:11.033457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.639 [2024-12-16 02:58:11.033488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.639 qpair failed and we were unable to recover it. 
00:36:40.639 [2024-12-16 02:58:11.033663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.639 [2024-12-16 02:58:11.033695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.639 qpair failed and we were unable to recover it. 00:36:40.639 [2024-12-16 02:58:11.033899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.639 [2024-12-16 02:58:11.033932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.639 qpair failed and we were unable to recover it. 00:36:40.639 [2024-12-16 02:58:11.034104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.639 [2024-12-16 02:58:11.034135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.639 qpair failed and we were unable to recover it. 00:36:40.639 [2024-12-16 02:58:11.034327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.639 [2024-12-16 02:58:11.034359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.639 qpair failed and we were unable to recover it. 00:36:40.639 [2024-12-16 02:58:11.034559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.639 [2024-12-16 02:58:11.034591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.639 qpair failed and we were unable to recover it. 
00:36:40.639 [2024-12-16 02:58:11.034826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.639 [2024-12-16 02:58:11.034867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.639 qpair failed and we were unable to recover it. 00:36:40.639 [2024-12-16 02:58:11.035056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.035088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 00:36:40.640 [2024-12-16 02:58:11.035272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.035303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 00:36:40.640 [2024-12-16 02:58:11.035551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.035581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 00:36:40.640 [2024-12-16 02:58:11.035841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.035882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 
00:36:40.640 [2024-12-16 02:58:11.036067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.036099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 00:36:40.640 [2024-12-16 02:58:11.036301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.036333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 00:36:40.640 [2024-12-16 02:58:11.036544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.036575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 00:36:40.640 [2024-12-16 02:58:11.036758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.036789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 00:36:40.640 [2024-12-16 02:58:11.036914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.036946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 
00:36:40.640 [2024-12-16 02:58:11.037127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.037158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 00:36:40.640 [2024-12-16 02:58:11.037396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.037433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 00:36:40.640 [2024-12-16 02:58:11.037645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.037677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 00:36:40.640 [2024-12-16 02:58:11.037845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.037886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 00:36:40.640 [2024-12-16 02:58:11.038021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.038053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 
00:36:40.640 [2024-12-16 02:58:11.038289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.038320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 00:36:40.640 [2024-12-16 02:58:11.038521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.038552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 00:36:40.640 [2024-12-16 02:58:11.038741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.038772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 00:36:40.640 [2024-12-16 02:58:11.038946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.038979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 00:36:40.640 [2024-12-16 02:58:11.039113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.039145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 
00:36:40.640 [2024-12-16 02:58:11.039382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.039414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 00:36:40.640 [2024-12-16 02:58:11.039534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.039566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 00:36:40.640 [2024-12-16 02:58:11.039745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.039776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 00:36:40.640 [2024-12-16 02:58:11.040050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.040082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 00:36:40.640 [2024-12-16 02:58:11.040212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.040243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 
00:36:40.640 [2024-12-16 02:58:11.040434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.040466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 00:36:40.640 [2024-12-16 02:58:11.040641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.040673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 00:36:40.640 [2024-12-16 02:58:11.040809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.040841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 00:36:40.640 [2024-12-16 02:58:11.041035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.041067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 00:36:40.640 [2024-12-16 02:58:11.041331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.041362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 
00:36:40.640 [2024-12-16 02:58:11.041486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.041517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 00:36:40.640 [2024-12-16 02:58:11.041686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.041718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 00:36:40.640 [2024-12-16 02:58:11.041917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.041949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 00:36:40.640 [2024-12-16 02:58:11.042125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.042156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 00:36:40.640 [2024-12-16 02:58:11.042347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.042379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 
00:36:40.640 [2024-12-16 02:58:11.042617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.042648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 00:36:40.640 [2024-12-16 02:58:11.042933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.042966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 00:36:40.640 [2024-12-16 02:58:11.043150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.043181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 00:36:40.640 [2024-12-16 02:58:11.043433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.043465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 00:36:40.640 [2024-12-16 02:58:11.043602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.640 [2024-12-16 02:58:11.043633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.640 qpair failed and we were unable to recover it. 
00:36:40.640 [2024-12-16 02:58:11.043804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.641 [2024-12-16 02:58:11.043836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.641 qpair failed and we were unable to recover it. 00:36:40.641 [2024-12-16 02:58:11.044042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.641 [2024-12-16 02:58:11.044075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.641 qpair failed and we were unable to recover it. 00:36:40.641 [2024-12-16 02:58:11.044201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.641 [2024-12-16 02:58:11.044233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.641 qpair failed and we were unable to recover it. 00:36:40.641 [2024-12-16 02:58:11.044409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.641 [2024-12-16 02:58:11.044441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.641 qpair failed and we were unable to recover it. 00:36:40.641 [2024-12-16 02:58:11.044679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.641 [2024-12-16 02:58:11.044711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.641 qpair failed and we were unable to recover it. 
00:36:40.641 [2024-12-16 02:58:11.044907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.641 [2024-12-16 02:58:11.044940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.641 qpair failed and we were unable to recover it. 00:36:40.641 [2024-12-16 02:58:11.045127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.641 [2024-12-16 02:58:11.045159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.641 qpair failed and we were unable to recover it. 00:36:40.641 [2024-12-16 02:58:11.045291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.641 [2024-12-16 02:58:11.045323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.641 qpair failed and we were unable to recover it. 00:36:40.641 [2024-12-16 02:58:11.045517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.641 [2024-12-16 02:58:11.045549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.641 qpair failed and we were unable to recover it. 00:36:40.641 [2024-12-16 02:58:11.045785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.641 [2024-12-16 02:58:11.045816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.641 qpair failed and we were unable to recover it. 
00:36:40.641 [2024-12-16 02:58:11.045991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.641 [2024-12-16 02:58:11.046024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.641 qpair failed and we were unable to recover it. 00:36:40.641 [2024-12-16 02:58:11.046229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.641 [2024-12-16 02:58:11.046267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.641 qpair failed and we were unable to recover it. 00:36:40.641 [2024-12-16 02:58:11.046396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.641 [2024-12-16 02:58:11.046427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.641 qpair failed and we were unable to recover it. 00:36:40.641 [2024-12-16 02:58:11.046691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.641 [2024-12-16 02:58:11.046722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.641 qpair failed and we were unable to recover it. 00:36:40.641 [2024-12-16 02:58:11.047006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.641 [2024-12-16 02:58:11.047039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.641 qpair failed and we were unable to recover it. 
00:36:40.644 [2024-12-16 02:58:11.071732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.644 [2024-12-16 02:58:11.071763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.644 qpair failed and we were unable to recover it. 00:36:40.644 [2024-12-16 02:58:11.071957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.644 [2024-12-16 02:58:11.071990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.644 qpair failed and we were unable to recover it. 00:36:40.644 [2024-12-16 02:58:11.072231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.644 [2024-12-16 02:58:11.072262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.644 qpair failed and we were unable to recover it. 00:36:40.644 [2024-12-16 02:58:11.072500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.644 [2024-12-16 02:58:11.072531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.644 qpair failed and we were unable to recover it. 00:36:40.644 [2024-12-16 02:58:11.072738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.644 [2024-12-16 02:58:11.072770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.644 qpair failed and we were unable to recover it. 
00:36:40.644 [2024-12-16 02:58:11.072952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.644 [2024-12-16 02:58:11.072985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.644 qpair failed and we were unable to recover it. 00:36:40.644 [2024-12-16 02:58:11.073100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.644 [2024-12-16 02:58:11.073131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.644 qpair failed and we were unable to recover it. 00:36:40.644 [2024-12-16 02:58:11.073307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.644 [2024-12-16 02:58:11.073338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.644 qpair failed and we were unable to recover it. 00:36:40.644 [2024-12-16 02:58:11.073511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.644 [2024-12-16 02:58:11.073543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.644 qpair failed and we were unable to recover it. 00:36:40.644 [2024-12-16 02:58:11.073783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.644 [2024-12-16 02:58:11.073814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.644 qpair failed and we were unable to recover it. 
00:36:40.644 [2024-12-16 02:58:11.074030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.644 [2024-12-16 02:58:11.074062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.644 qpair failed and we were unable to recover it. 00:36:40.644 [2024-12-16 02:58:11.074248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.644 [2024-12-16 02:58:11.074280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.644 qpair failed and we were unable to recover it. 00:36:40.644 [2024-12-16 02:58:11.074519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.644 [2024-12-16 02:58:11.074550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.644 qpair failed and we were unable to recover it. 00:36:40.644 [2024-12-16 02:58:11.074722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.644 [2024-12-16 02:58:11.074754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.644 qpair failed and we were unable to recover it. 00:36:40.644 [2024-12-16 02:58:11.074936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.644 [2024-12-16 02:58:11.074968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.644 qpair failed and we were unable to recover it. 
00:36:40.644 [2024-12-16 02:58:11.075144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.644 [2024-12-16 02:58:11.075176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.644 qpair failed and we were unable to recover it. 00:36:40.644 [2024-12-16 02:58:11.075367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.644 [2024-12-16 02:58:11.075399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.644 qpair failed and we were unable to recover it. 00:36:40.644 [2024-12-16 02:58:11.075512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.644 [2024-12-16 02:58:11.075544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.644 qpair failed and we were unable to recover it. 00:36:40.644 [2024-12-16 02:58:11.075658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.644 [2024-12-16 02:58:11.075690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.644 qpair failed and we were unable to recover it. 00:36:40.644 [2024-12-16 02:58:11.075801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.644 [2024-12-16 02:58:11.075833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.644 qpair failed and we were unable to recover it. 
00:36:40.644 [2024-12-16 02:58:11.076037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.644 [2024-12-16 02:58:11.076069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.644 qpair failed and we were unable to recover it. 00:36:40.644 [2024-12-16 02:58:11.076253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.644 [2024-12-16 02:58:11.076284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.644 qpair failed and we were unable to recover it. 00:36:40.644 [2024-12-16 02:58:11.076405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.644 [2024-12-16 02:58:11.076437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.644 qpair failed and we were unable to recover it. 00:36:40.644 [2024-12-16 02:58:11.076673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.644 [2024-12-16 02:58:11.076704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.644 qpair failed and we were unable to recover it. 00:36:40.644 [2024-12-16 02:58:11.076824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.644 [2024-12-16 02:58:11.076874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.644 qpair failed and we were unable to recover it. 
00:36:40.644 [2024-12-16 02:58:11.077007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.077039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.645 [2024-12-16 02:58:11.077175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.077207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.645 [2024-12-16 02:58:11.077309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.077340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.645 [2024-12-16 02:58:11.077552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.077583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.645 [2024-12-16 02:58:11.077750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.077782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 
00:36:40.645 [2024-12-16 02:58:11.077965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.077998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.645 [2024-12-16 02:58:11.078130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.078161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.645 [2024-12-16 02:58:11.078406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.078438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.645 [2024-12-16 02:58:11.078685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.078722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.645 [2024-12-16 02:58:11.078913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.078946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 
00:36:40.645 [2024-12-16 02:58:11.079077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.079109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.645 [2024-12-16 02:58:11.079285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.079316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.645 [2024-12-16 02:58:11.079498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.079530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.645 [2024-12-16 02:58:11.079717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.079748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.645 [2024-12-16 02:58:11.079869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.079900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 
00:36:40.645 [2024-12-16 02:58:11.080150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.080182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.645 [2024-12-16 02:58:11.080314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.080346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.645 [2024-12-16 02:58:11.080523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.080555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.645 [2024-12-16 02:58:11.080685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.080717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.645 [2024-12-16 02:58:11.080835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.080893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 
00:36:40.645 [2024-12-16 02:58:11.081010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.081042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.645 [2024-12-16 02:58:11.081212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.081243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.645 [2024-12-16 02:58:11.081538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.081570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.645 [2024-12-16 02:58:11.081699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.081731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.645 [2024-12-16 02:58:11.081919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.081952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 
00:36:40.645 [2024-12-16 02:58:11.082215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.082246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.645 [2024-12-16 02:58:11.082382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.082413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.645 [2024-12-16 02:58:11.082603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.082635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.645 [2024-12-16 02:58:11.082766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.082798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.645 [2024-12-16 02:58:11.083072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.083104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 
00:36:40.645 [2024-12-16 02:58:11.083219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.083251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.645 [2024-12-16 02:58:11.083430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.083461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.645 [2024-12-16 02:58:11.083662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.083694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.645 [2024-12-16 02:58:11.083809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.083840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.645 [2024-12-16 02:58:11.083977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.084009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 
00:36:40.645 [2024-12-16 02:58:11.084278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.084311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.645 [2024-12-16 02:58:11.084433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.084464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.645 [2024-12-16 02:58:11.084635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.084667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.645 [2024-12-16 02:58:11.084904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.645 [2024-12-16 02:58:11.084936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.645 qpair failed and we were unable to recover it. 00:36:40.646 [2024-12-16 02:58:11.085086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.646 [2024-12-16 02:58:11.085117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.646 qpair failed and we were unable to recover it. 
00:36:40.646 [2024-12-16 02:58:11.085297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.646 [2024-12-16 02:58:11.085329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.646 qpair failed and we were unable to recover it. 00:36:40.646 [2024-12-16 02:58:11.085514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.646 [2024-12-16 02:58:11.085545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.646 qpair failed and we were unable to recover it. 00:36:40.646 [2024-12-16 02:58:11.085768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.646 [2024-12-16 02:58:11.085800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.646 qpair failed and we were unable to recover it. 00:36:40.646 [2024-12-16 02:58:11.085981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.646 [2024-12-16 02:58:11.086014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.646 qpair failed and we were unable to recover it. 00:36:40.646 [2024-12-16 02:58:11.086209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.646 [2024-12-16 02:58:11.086241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.646 qpair failed and we were unable to recover it. 
00:36:40.646 [2024-12-16 02:58:11.086475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.646 [2024-12-16 02:58:11.086508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.646 qpair failed and we were unable to recover it. 00:36:40.646 [2024-12-16 02:58:11.086712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.646 [2024-12-16 02:58:11.086743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.646 qpair failed and we were unable to recover it. 00:36:40.646 [2024-12-16 02:58:11.086876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.646 [2024-12-16 02:58:11.086909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.646 qpair failed and we were unable to recover it. 00:36:40.646 [2024-12-16 02:58:11.087058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.646 [2024-12-16 02:58:11.087095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.646 qpair failed and we were unable to recover it. 00:36:40.646 [2024-12-16 02:58:11.087217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.646 [2024-12-16 02:58:11.087249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.646 qpair failed and we were unable to recover it. 
00:36:40.646 [2024-12-16 02:58:11.087421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.646 [2024-12-16 02:58:11.087452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.646 qpair failed and we were unable to recover it. 00:36:40.646 [2024-12-16 02:58:11.087712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.646 [2024-12-16 02:58:11.087744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.646 qpair failed and we were unable to recover it. 00:36:40.646 [2024-12-16 02:58:11.087877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.646 [2024-12-16 02:58:11.087909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.646 qpair failed and we were unable to recover it. 00:36:40.646 [2024-12-16 02:58:11.088049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.646 [2024-12-16 02:58:11.088079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.646 qpair failed and we were unable to recover it. 00:36:40.646 [2024-12-16 02:58:11.088272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.646 [2024-12-16 02:58:11.088304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.646 qpair failed and we were unable to recover it. 
00:36:40.649 [2024-12-16 02:58:11.110001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.649 [2024-12-16 02:58:11.110033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.649 qpair failed and we were unable to recover it. 00:36:40.649 [2024-12-16 02:58:11.110211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.649 [2024-12-16 02:58:11.110243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.649 qpair failed and we were unable to recover it. 00:36:40.649 [2024-12-16 02:58:11.110351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.649 [2024-12-16 02:58:11.110382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.649 qpair failed and we were unable to recover it. 00:36:40.649 [2024-12-16 02:58:11.110518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.649 [2024-12-16 02:58:11.110549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.649 qpair failed and we were unable to recover it. 00:36:40.649 [2024-12-16 02:58:11.110665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.649 [2024-12-16 02:58:11.110697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.649 qpair failed and we were unable to recover it. 
00:36:40.649 [2024-12-16 02:58:11.110887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.649 [2024-12-16 02:58:11.110919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.649 qpair failed and we were unable to recover it. 00:36:40.649 [2024-12-16 02:58:11.111026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.649 [2024-12-16 02:58:11.111058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.649 qpair failed and we were unable to recover it. 00:36:40.649 [2024-12-16 02:58:11.111166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.649 [2024-12-16 02:58:11.111198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.649 qpair failed and we were unable to recover it. 00:36:40.649 [2024-12-16 02:58:11.111366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.649 [2024-12-16 02:58:11.111408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.649 qpair failed and we were unable to recover it. 00:36:40.649 [2024-12-16 02:58:11.111534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.649 [2024-12-16 02:58:11.111567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.649 qpair failed and we were unable to recover it. 
00:36:40.649 [2024-12-16 02:58:11.111681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.649 [2024-12-16 02:58:11.111713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.649 qpair failed and we were unable to recover it. 00:36:40.649 [2024-12-16 02:58:11.111953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.649 [2024-12-16 02:58:11.111986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.649 qpair failed and we were unable to recover it. 00:36:40.649 [2024-12-16 02:58:11.112119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.649 [2024-12-16 02:58:11.112150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.649 qpair failed and we were unable to recover it. 00:36:40.649 [2024-12-16 02:58:11.112387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.649 [2024-12-16 02:58:11.112419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.649 qpair failed and we were unable to recover it. 00:36:40.649 [2024-12-16 02:58:11.112541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.649 [2024-12-16 02:58:11.112572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.649 qpair failed and we were unable to recover it. 
00:36:40.649 [2024-12-16 02:58:11.112722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.649 [2024-12-16 02:58:11.112754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.649 qpair failed and we were unable to recover it. 00:36:40.649 [2024-12-16 02:58:11.112994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.649 [2024-12-16 02:58:11.113027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.649 qpair failed and we were unable to recover it. 00:36:40.649 [2024-12-16 02:58:11.113216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.649 [2024-12-16 02:58:11.113248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.649 qpair failed and we were unable to recover it. 00:36:40.649 [2024-12-16 02:58:11.113418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.649 [2024-12-16 02:58:11.113451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.649 qpair failed and we were unable to recover it. 00:36:40.649 [2024-12-16 02:58:11.113555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.649 [2024-12-16 02:58:11.113586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.649 qpair failed and we were unable to recover it. 
00:36:40.649 [2024-12-16 02:58:11.113755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.649 [2024-12-16 02:58:11.113787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.649 qpair failed and we were unable to recover it. 00:36:40.649 [2024-12-16 02:58:11.113960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.649 [2024-12-16 02:58:11.113993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.649 qpair failed and we were unable to recover it. 00:36:40.649 [2024-12-16 02:58:11.114258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.649 [2024-12-16 02:58:11.114290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.649 qpair failed and we were unable to recover it. 00:36:40.649 [2024-12-16 02:58:11.114479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.649 [2024-12-16 02:58:11.114511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.649 qpair failed and we were unable to recover it. 00:36:40.649 [2024-12-16 02:58:11.114778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.649 [2024-12-16 02:58:11.114810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.649 qpair failed and we were unable to recover it. 
00:36:40.649 [2024-12-16 02:58:11.115028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.649 [2024-12-16 02:58:11.115060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.649 qpair failed and we were unable to recover it. 00:36:40.649 [2024-12-16 02:58:11.115185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.649 [2024-12-16 02:58:11.115217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.649 qpair failed and we were unable to recover it. 00:36:40.649 [2024-12-16 02:58:11.115387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.649 [2024-12-16 02:58:11.115418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.649 qpair failed and we were unable to recover it. 00:36:40.649 [2024-12-16 02:58:11.115535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.649 [2024-12-16 02:58:11.115566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.649 qpair failed and we were unable to recover it. 00:36:40.649 [2024-12-16 02:58:11.115807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.649 [2024-12-16 02:58:11.115839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.649 qpair failed and we were unable to recover it. 
00:36:40.649 [2024-12-16 02:58:11.116046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.116079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 00:36:40.650 [2024-12-16 02:58:11.116318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.116350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 00:36:40.650 [2024-12-16 02:58:11.116615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.116647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 00:36:40.650 [2024-12-16 02:58:11.116763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.116795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 00:36:40.650 [2024-12-16 02:58:11.116942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.116974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 
00:36:40.650 [2024-12-16 02:58:11.117154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.117186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 00:36:40.650 [2024-12-16 02:58:11.117361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.117392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 00:36:40.650 [2024-12-16 02:58:11.117586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.117618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 00:36:40.650 [2024-12-16 02:58:11.117744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.117776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 00:36:40.650 [2024-12-16 02:58:11.118017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.118056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 
00:36:40.650 [2024-12-16 02:58:11.118250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.118282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 00:36:40.650 [2024-12-16 02:58:11.118517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.118550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 00:36:40.650 [2024-12-16 02:58:11.118664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.118696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 00:36:40.650 [2024-12-16 02:58:11.118886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.118919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 00:36:40.650 [2024-12-16 02:58:11.119110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.119142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 
00:36:40.650 [2024-12-16 02:58:11.119399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.119431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 00:36:40.650 [2024-12-16 02:58:11.119678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.119709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 00:36:40.650 [2024-12-16 02:58:11.119841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.119883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 00:36:40.650 [2024-12-16 02:58:11.120074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.120112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 00:36:40.650 [2024-12-16 02:58:11.120230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.120261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 
00:36:40.650 [2024-12-16 02:58:11.120434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.120466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 00:36:40.650 [2024-12-16 02:58:11.120730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.120762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 00:36:40.650 [2024-12-16 02:58:11.120940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.120973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 00:36:40.650 [2024-12-16 02:58:11.121155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.121186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 00:36:40.650 [2024-12-16 02:58:11.121323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.121356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 
00:36:40.650 [2024-12-16 02:58:11.121615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.121648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 00:36:40.650 [2024-12-16 02:58:11.121766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.121798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 00:36:40.650 [2024-12-16 02:58:11.121929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.121962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 00:36:40.650 [2024-12-16 02:58:11.122135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.122167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 00:36:40.650 [2024-12-16 02:58:11.122275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.122306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 
00:36:40.650 [2024-12-16 02:58:11.122414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.122446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 00:36:40.650 [2024-12-16 02:58:11.122626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.122659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 00:36:40.650 [2024-12-16 02:58:11.122779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.122811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 00:36:40.650 [2024-12-16 02:58:11.123061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.123093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 00:36:40.650 [2024-12-16 02:58:11.123203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.650 [2024-12-16 02:58:11.123236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.650 qpair failed and we were unable to recover it. 
00:36:40.651 [2024-12-16 02:58:11.123418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.651 [2024-12-16 02:58:11.123450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.651 qpair failed and we were unable to recover it. 00:36:40.651 [2024-12-16 02:58:11.123691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.651 [2024-12-16 02:58:11.123722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.651 qpair failed and we were unable to recover it. 00:36:40.651 [2024-12-16 02:58:11.123862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.651 [2024-12-16 02:58:11.123896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.651 qpair failed and we were unable to recover it. 00:36:40.651 [2024-12-16 02:58:11.124106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.651 [2024-12-16 02:58:11.124139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.651 qpair failed and we were unable to recover it. 00:36:40.651 [2024-12-16 02:58:11.124385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.651 [2024-12-16 02:58:11.124417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.651 qpair failed and we were unable to recover it. 
00:36:40.651 [2024-12-16 02:58:11.124628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.651 [2024-12-16 02:58:11.124658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.651 qpair failed and we were unable to recover it. 00:36:40.651 [2024-12-16 02:58:11.124789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.651 [2024-12-16 02:58:11.124821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.651 qpair failed and we were unable to recover it. 00:36:40.651 [2024-12-16 02:58:11.124944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.651 [2024-12-16 02:58:11.124977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.651 qpair failed and we were unable to recover it. 00:36:40.651 [2024-12-16 02:58:11.125101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.651 [2024-12-16 02:58:11.125133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.651 qpair failed and we were unable to recover it. 00:36:40.651 [2024-12-16 02:58:11.125404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.651 [2024-12-16 02:58:11.125436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.651 qpair failed and we were unable to recover it. 
00:36:40.651 [2024-12-16 02:58:11.125615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.651 [2024-12-16 02:58:11.125648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.651 qpair failed and we were unable to recover it. 00:36:40.651 [2024-12-16 02:58:11.125770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.651 [2024-12-16 02:58:11.125803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.651 qpair failed and we were unable to recover it. 00:36:40.651 [2024-12-16 02:58:11.125999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.651 [2024-12-16 02:58:11.126033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.651 qpair failed and we were unable to recover it. 00:36:40.651 [2024-12-16 02:58:11.126171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.651 [2024-12-16 02:58:11.126203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.651 qpair failed and we were unable to recover it. 00:36:40.651 [2024-12-16 02:58:11.126398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.651 [2024-12-16 02:58:11.126429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.651 qpair failed and we were unable to recover it. 
00:36:40.651 [2024-12-16 02:58:11.126622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.651 [2024-12-16 02:58:11.126653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:40.651 qpair failed and we were unable to recover it.
[... the three lines above repeat for every retry against tqpair=0x7f273c000b90 through 02:58:11.141824; only the timestamps differ ...]
00:36:40.653 [2024-12-16 02:58:11.142050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.653 [2024-12-16 02:58:11.142125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.653 qpair failed and we were unable to recover it.
[... repeats likewise for tqpair=0x7f2738000b90 through 02:58:11.150593 ...]
00:36:40.654 [2024-12-16 02:58:11.150787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.654 [2024-12-16 02:58:11.150827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:40.654 qpair failed and we were unable to recover it.
00:36:40.654 [2024-12-16 02:58:11.151036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.654 [2024-12-16 02:58:11.151069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.654 qpair failed and we were unable to recover it. 00:36:40.654 [2024-12-16 02:58:11.151313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.654 [2024-12-16 02:58:11.151345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.654 qpair failed and we were unable to recover it. 00:36:40.654 [2024-12-16 02:58:11.151519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.654 [2024-12-16 02:58:11.151550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.654 qpair failed and we were unable to recover it. 00:36:40.654 [2024-12-16 02:58:11.151676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.654 [2024-12-16 02:58:11.151708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.654 qpair failed and we were unable to recover it. 00:36:40.654 [2024-12-16 02:58:11.151952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.654 [2024-12-16 02:58:11.151985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.654 qpair failed and we were unable to recover it. 
00:36:40.654 [2024-12-16 02:58:11.152156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.654 [2024-12-16 02:58:11.152188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.654 qpair failed and we were unable to recover it. 00:36:40.654 [2024-12-16 02:58:11.152379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.654 [2024-12-16 02:58:11.152410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.654 qpair failed and we were unable to recover it. 00:36:40.654 [2024-12-16 02:58:11.152512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.654 [2024-12-16 02:58:11.152543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.654 qpair failed and we were unable to recover it. 00:36:40.654 [2024-12-16 02:58:11.152669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.654 [2024-12-16 02:58:11.152702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.654 qpair failed and we were unable to recover it. 00:36:40.654 [2024-12-16 02:58:11.152801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.654 [2024-12-16 02:58:11.152832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.654 qpair failed and we were unable to recover it. 
00:36:40.654 [2024-12-16 02:58:11.153057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.654 [2024-12-16 02:58:11.153089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.654 qpair failed and we were unable to recover it. 00:36:40.654 [2024-12-16 02:58:11.153277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.654 [2024-12-16 02:58:11.153309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.654 qpair failed and we were unable to recover it. 00:36:40.654 [2024-12-16 02:58:11.153517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.654 [2024-12-16 02:58:11.153548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.654 qpair failed and we were unable to recover it. 00:36:40.654 [2024-12-16 02:58:11.153731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.654 [2024-12-16 02:58:11.153768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.654 qpair failed and we were unable to recover it. 00:36:40.654 [2024-12-16 02:58:11.154034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.654 [2024-12-16 02:58:11.154067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.654 qpair failed and we were unable to recover it. 
00:36:40.654 [2024-12-16 02:58:11.154244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.654 [2024-12-16 02:58:11.154276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.654 qpair failed and we were unable to recover it. 00:36:40.654 [2024-12-16 02:58:11.154487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.654 [2024-12-16 02:58:11.154519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.654 qpair failed and we were unable to recover it. 00:36:40.654 [2024-12-16 02:58:11.154758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.654 [2024-12-16 02:58:11.154789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.654 qpair failed and we were unable to recover it. 00:36:40.654 [2024-12-16 02:58:11.154981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.654 [2024-12-16 02:58:11.155014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.654 qpair failed and we were unable to recover it. 00:36:40.654 [2024-12-16 02:58:11.155198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.654 [2024-12-16 02:58:11.155229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.654 qpair failed and we were unable to recover it. 
00:36:40.654 [2024-12-16 02:58:11.155414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.654 [2024-12-16 02:58:11.155446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.654 qpair failed and we were unable to recover it. 00:36:40.654 [2024-12-16 02:58:11.155611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.654 [2024-12-16 02:58:11.155642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.654 qpair failed and we were unable to recover it. 00:36:40.654 [2024-12-16 02:58:11.155903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.654 [2024-12-16 02:58:11.155936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.654 qpair failed and we were unable to recover it. 00:36:40.654 [2024-12-16 02:58:11.156067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.654 [2024-12-16 02:58:11.156098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.654 qpair failed and we were unable to recover it. 00:36:40.654 [2024-12-16 02:58:11.156225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.654 [2024-12-16 02:58:11.156257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.654 qpair failed and we were unable to recover it. 
00:36:40.654 [2024-12-16 02:58:11.156449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.654 [2024-12-16 02:58:11.156480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.654 qpair failed and we were unable to recover it. 00:36:40.654 [2024-12-16 02:58:11.156720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.156752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 00:36:40.655 [2024-12-16 02:58:11.156874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.156906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 00:36:40.655 [2024-12-16 02:58:11.157091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.157123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 00:36:40.655 [2024-12-16 02:58:11.157306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.157337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 
00:36:40.655 [2024-12-16 02:58:11.157461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.157493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 00:36:40.655 [2024-12-16 02:58:11.157662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.157694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 00:36:40.655 [2024-12-16 02:58:11.157929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.157962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 00:36:40.655 [2024-12-16 02:58:11.158201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.158233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 00:36:40.655 [2024-12-16 02:58:11.158418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.158450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 
00:36:40.655 [2024-12-16 02:58:11.158705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.158737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 00:36:40.655 [2024-12-16 02:58:11.158930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.158963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 00:36:40.655 [2024-12-16 02:58:11.159166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.159198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 00:36:40.655 [2024-12-16 02:58:11.159314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.159346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 00:36:40.655 [2024-12-16 02:58:11.159484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.159516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 
00:36:40.655 [2024-12-16 02:58:11.159689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.159758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 00:36:40.655 [2024-12-16 02:58:11.160050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.160093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 00:36:40.655 [2024-12-16 02:58:11.160273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.160314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 00:36:40.655 [2024-12-16 02:58:11.160534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.160572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 00:36:40.655 [2024-12-16 02:58:11.160721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.160756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 
00:36:40.655 [2024-12-16 02:58:11.160878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.160916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 00:36:40.655 [2024-12-16 02:58:11.161106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.161142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 00:36:40.655 [2024-12-16 02:58:11.161263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.161295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 00:36:40.655 [2024-12-16 02:58:11.161554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.161585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 00:36:40.655 [2024-12-16 02:58:11.161777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.161809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 
00:36:40.655 [2024-12-16 02:58:11.162083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.162116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 00:36:40.655 [2024-12-16 02:58:11.162284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.162316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 00:36:40.655 [2024-12-16 02:58:11.162500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.162532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 00:36:40.655 [2024-12-16 02:58:11.162710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.162747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 00:36:40.655 [2024-12-16 02:58:11.162932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.162965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 
00:36:40.655 [2024-12-16 02:58:11.163067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.163098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 00:36:40.655 [2024-12-16 02:58:11.163356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.163388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 00:36:40.655 [2024-12-16 02:58:11.163577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.163609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 00:36:40.655 [2024-12-16 02:58:11.163738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.163769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 00:36:40.655 [2024-12-16 02:58:11.163909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.163942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 
00:36:40.655 [2024-12-16 02:58:11.164081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.655 [2024-12-16 02:58:11.164113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.655 qpair failed and we were unable to recover it. 00:36:40.655 [2024-12-16 02:58:11.164284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.164315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 00:36:40.656 [2024-12-16 02:58:11.164581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.164613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 00:36:40.656 [2024-12-16 02:58:11.164716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.164748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 00:36:40.656 [2024-12-16 02:58:11.164950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.164982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 
00:36:40.656 [2024-12-16 02:58:11.165174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.165205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 00:36:40.656 [2024-12-16 02:58:11.165347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.165378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 00:36:40.656 [2024-12-16 02:58:11.165501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.165532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 00:36:40.656 [2024-12-16 02:58:11.165822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.165863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 00:36:40.656 [2024-12-16 02:58:11.166039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.166071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 
00:36:40.656 [2024-12-16 02:58:11.166189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.166220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 00:36:40.656 [2024-12-16 02:58:11.166480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.166512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 00:36:40.656 [2024-12-16 02:58:11.166615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.166646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 00:36:40.656 [2024-12-16 02:58:11.166755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.166787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 00:36:40.656 [2024-12-16 02:58:11.166987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.167019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 
00:36:40.656 [2024-12-16 02:58:11.167282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.167314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 00:36:40.656 [2024-12-16 02:58:11.167514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.167545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 00:36:40.656 [2024-12-16 02:58:11.167719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.167750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 00:36:40.656 [2024-12-16 02:58:11.167986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.168019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 00:36:40.656 [2024-12-16 02:58:11.168225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.168257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 
00:36:40.656 [2024-12-16 02:58:11.168510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.168581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 00:36:40.656 [2024-12-16 02:58:11.168845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.168905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 00:36:40.656 [2024-12-16 02:58:11.169021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.169053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 00:36:40.656 [2024-12-16 02:58:11.169244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.169275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 00:36:40.656 [2024-12-16 02:58:11.169391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.169423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 
00:36:40.656 [2024-12-16 02:58:11.169555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.169587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 00:36:40.656 [2024-12-16 02:58:11.169765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.169796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 00:36:40.656 [2024-12-16 02:58:11.169928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.169961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 00:36:40.656 [2024-12-16 02:58:11.170140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.170172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 00:36:40.656 [2024-12-16 02:58:11.170354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.170386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 
00:36:40.656 [2024-12-16 02:58:11.170622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.170653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 00:36:40.656 [2024-12-16 02:58:11.170892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.170925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 00:36:40.656 [2024-12-16 02:58:11.171040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.171072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 00:36:40.656 [2024-12-16 02:58:11.171191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.171223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 00:36:40.656 [2024-12-16 02:58:11.171403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.171436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 
00:36:40.656 [2024-12-16 02:58:11.171619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.171650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 00:36:40.656 [2024-12-16 02:58:11.171781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.171812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 00:36:40.656 [2024-12-16 02:58:11.172064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.172096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 00:36:40.656 [2024-12-16 02:58:11.172212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.172244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 00:36:40.656 [2024-12-16 02:58:11.172365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.656 [2024-12-16 02:58:11.172396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.656 qpair failed and we were unable to recover it. 
00:36:40.656 [2024-12-16 02:58:11.172507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.172537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.172776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.172808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.173014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.173047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.173169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.173200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.173375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.173407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 
00:36:40.657 [2024-12-16 02:58:11.173585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.173617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.173788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.173820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.174014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.174055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.174232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.174264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.174449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.174481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 
00:36:40.657 [2024-12-16 02:58:11.174616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.174647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.174860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.174892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.175007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.175038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.175156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.175188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.175372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.175403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 
00:36:40.657 [2024-12-16 02:58:11.175604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.175635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.175894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.175928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.176123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.176155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.176268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.176299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.176409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.176440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 
00:36:40.657 [2024-12-16 02:58:11.176643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.176675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.176807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.176839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.177018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.177050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.177310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.177341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.177533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.177565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 
00:36:40.657 [2024-12-16 02:58:11.177679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.177711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.177908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.177940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.178122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.178154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.178259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.178291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.178462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.178493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 
00:36:40.657 [2024-12-16 02:58:11.178616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.178647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.178817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.178856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.178997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.179029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.179216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.179248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.179498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.179533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 
00:36:40.657 [2024-12-16 02:58:11.179651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.179682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.179857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.179890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.180001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.180033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.180226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.657 [2024-12-16 02:58:11.180257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.657 qpair failed and we were unable to recover it. 00:36:40.657 [2024-12-16 02:58:11.180434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.180465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 
00:36:40.658 [2024-12-16 02:58:11.180582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.180614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 00:36:40.658 [2024-12-16 02:58:11.180786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.180817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 00:36:40.658 [2024-12-16 02:58:11.181031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.181063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 00:36:40.658 [2024-12-16 02:58:11.181324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.181355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 00:36:40.658 [2024-12-16 02:58:11.181529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.181560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 
00:36:40.658 [2024-12-16 02:58:11.181794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.181825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 00:36:40.658 [2024-12-16 02:58:11.182025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.182057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 00:36:40.658 [2024-12-16 02:58:11.182229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.182259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 00:36:40.658 [2024-12-16 02:58:11.182383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.182414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 00:36:40.658 [2024-12-16 02:58:11.182656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.182687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 
00:36:40.658 [2024-12-16 02:58:11.182868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.182901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 00:36:40.658 [2024-12-16 02:58:11.183153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.183185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 00:36:40.658 [2024-12-16 02:58:11.183374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.183405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 00:36:40.658 [2024-12-16 02:58:11.183615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.183646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 00:36:40.658 [2024-12-16 02:58:11.183918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.183951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 
00:36:40.658 [2024-12-16 02:58:11.184161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.184192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 00:36:40.658 [2024-12-16 02:58:11.184384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.184416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 00:36:40.658 [2024-12-16 02:58:11.184551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.184582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 00:36:40.658 [2024-12-16 02:58:11.184770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.184801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 00:36:40.658 [2024-12-16 02:58:11.184932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.184965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 
00:36:40.658 [2024-12-16 02:58:11.185166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.185197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 00:36:40.658 [2024-12-16 02:58:11.185459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.185495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 00:36:40.658 [2024-12-16 02:58:11.185670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.185700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 00:36:40.658 [2024-12-16 02:58:11.185946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.185978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 00:36:40.658 [2024-12-16 02:58:11.186177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.186208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 
00:36:40.658 [2024-12-16 02:58:11.186312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.186344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 00:36:40.658 [2024-12-16 02:58:11.186535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.186565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 00:36:40.658 [2024-12-16 02:58:11.186803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.186835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 00:36:40.658 [2024-12-16 02:58:11.187020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.187052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 00:36:40.658 [2024-12-16 02:58:11.187176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.187207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 
00:36:40.658 [2024-12-16 02:58:11.187403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.187434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 00:36:40.658 [2024-12-16 02:58:11.187644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.187675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 00:36:40.658 [2024-12-16 02:58:11.187858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.187890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 00:36:40.658 [2024-12-16 02:58:11.188084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.188116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 00:36:40.658 [2024-12-16 02:58:11.188238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.658 [2024-12-16 02:58:11.188268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.658 qpair failed and we were unable to recover it. 
00:36:40.659 [2024-12-16 02:58:11.194068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.659 [2024-12-16 02:58:11.194100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.659 qpair failed and we were unable to recover it.
00:36:40.659 [2024-12-16 02:58:11.194218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.659 [2024-12-16 02:58:11.194249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.659 qpair failed and we were unable to recover it.
00:36:40.659 [2024-12-16 02:58:11.194531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.659 [2024-12-16 02:58:11.194597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.659 qpair failed and we were unable to recover it.
00:36:40.659 [2024-12-16 02:58:11.194744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.659 [2024-12-16 02:58:11.194791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.659 qpair failed and we were unable to recover it.
00:36:40.659 [2024-12-16 02:58:11.194997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.659 [2024-12-16 02:58:11.195033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.659 qpair failed and we were unable to recover it.
00:36:40.661 [2024-12-16 02:58:11.213368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.661 [2024-12-16 02:58:11.213399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.661 qpair failed and we were unable to recover it. 00:36:40.661 [2024-12-16 02:58:11.213512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.661 [2024-12-16 02:58:11.213543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.661 qpair failed and we were unable to recover it. 00:36:40.661 [2024-12-16 02:58:11.213657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.661 [2024-12-16 02:58:11.213690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.661 qpair failed and we were unable to recover it. 00:36:40.661 [2024-12-16 02:58:11.213950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.661 [2024-12-16 02:58:11.213984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.661 qpair failed and we were unable to recover it. 00:36:40.661 [2024-12-16 02:58:11.214113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.661 [2024-12-16 02:58:11.214144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.661 qpair failed and we were unable to recover it. 
00:36:40.661 [2024-12-16 02:58:11.214329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.661 [2024-12-16 02:58:11.214361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.661 qpair failed and we were unable to recover it. 00:36:40.661 [2024-12-16 02:58:11.214535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.661 [2024-12-16 02:58:11.214566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.661 qpair failed and we were unable to recover it. 00:36:40.661 [2024-12-16 02:58:11.214764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.661 [2024-12-16 02:58:11.214795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.214978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.215010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.215180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.215211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 
00:36:40.662 [2024-12-16 02:58:11.215399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.215430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.215597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.215634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.215874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.215907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.216171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.216201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.216330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.216361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 
00:36:40.662 [2024-12-16 02:58:11.216536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.216567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.216807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.216837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.217109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.217141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.217314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.217345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.217610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.217641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 
00:36:40.662 [2024-12-16 02:58:11.217769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.217800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.218005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.218039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.218207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.218238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.218478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.218509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.218684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.218716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 
00:36:40.662 [2024-12-16 02:58:11.218857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.218890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.219022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.219054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.219294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.219329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.219593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.219625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.219796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.219828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 
00:36:40.662 [2024-12-16 02:58:11.219955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.219988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.220227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.220259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.220451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.220483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.220674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.220705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.220877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.220910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 
00:36:40.662 [2024-12-16 02:58:11.221085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.221117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.221227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.221258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.221519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.221551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.221723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.221760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.221876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.221909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 
00:36:40.662 [2024-12-16 02:58:11.222095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.222126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.222239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.222270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.222470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.222502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.222669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.222700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.222963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.222997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 
00:36:40.662 [2024-12-16 02:58:11.223178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.223209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.223329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.662 [2024-12-16 02:58:11.223360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.662 qpair failed and we were unable to recover it. 00:36:40.662 [2024-12-16 02:58:11.223467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.223499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 00:36:40.663 [2024-12-16 02:58:11.223692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.223723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 00:36:40.663 [2024-12-16 02:58:11.223938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.223971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 
00:36:40.663 [2024-12-16 02:58:11.224160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.224192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 00:36:40.663 [2024-12-16 02:58:11.224391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.224422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 00:36:40.663 [2024-12-16 02:58:11.224619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.224652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 00:36:40.663 [2024-12-16 02:58:11.224825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.224878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 00:36:40.663 [2024-12-16 02:58:11.224998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.225030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 
00:36:40.663 [2024-12-16 02:58:11.225209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.225241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 00:36:40.663 [2024-12-16 02:58:11.225341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.225373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 00:36:40.663 [2024-12-16 02:58:11.225562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.225593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 00:36:40.663 [2024-12-16 02:58:11.225770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.225801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 00:36:40.663 [2024-12-16 02:58:11.226014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.226048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 
00:36:40.663 [2024-12-16 02:58:11.226308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.226340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 00:36:40.663 [2024-12-16 02:58:11.226588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.226620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 00:36:40.663 [2024-12-16 02:58:11.226863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.226896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 00:36:40.663 [2024-12-16 02:58:11.227071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.227102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 00:36:40.663 [2024-12-16 02:58:11.227272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.227304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 
00:36:40.663 [2024-12-16 02:58:11.227566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.227598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 00:36:40.663 [2024-12-16 02:58:11.227867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.227901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 00:36:40.663 [2024-12-16 02:58:11.228037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.228069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 00:36:40.663 [2024-12-16 02:58:11.228206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.228238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 00:36:40.663 [2024-12-16 02:58:11.228365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.228397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 
00:36:40.663 [2024-12-16 02:58:11.228601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.228632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 00:36:40.663 [2024-12-16 02:58:11.228835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.228887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 00:36:40.663 [2024-12-16 02:58:11.229022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.229053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 00:36:40.663 [2024-12-16 02:58:11.229319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.229351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 00:36:40.663 [2024-12-16 02:58:11.229534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.229565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 
00:36:40.663 [2024-12-16 02:58:11.229772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.229803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 00:36:40.663 [2024-12-16 02:58:11.230050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.230084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 00:36:40.663 [2024-12-16 02:58:11.230321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.230352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 00:36:40.663 [2024-12-16 02:58:11.230524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.230555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 00:36:40.663 [2024-12-16 02:58:11.230759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.663 [2024-12-16 02:58:11.230798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.663 qpair failed and we were unable to recover it. 
00:36:40.667 [2024-12-16 02:58:11.255372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.255404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 00:36:40.667 [2024-12-16 02:58:11.255595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.255626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 00:36:40.667 [2024-12-16 02:58:11.255758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.255790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 00:36:40.667 [2024-12-16 02:58:11.255991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.256025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 00:36:40.667 [2024-12-16 02:58:11.256226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.256257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 
00:36:40.667 [2024-12-16 02:58:11.256524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.256555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 00:36:40.667 [2024-12-16 02:58:11.256681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.256713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 00:36:40.667 [2024-12-16 02:58:11.256856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.256889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 00:36:40.667 [2024-12-16 02:58:11.257063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.257094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 00:36:40.667 [2024-12-16 02:58:11.257360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.257391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 
00:36:40.667 [2024-12-16 02:58:11.257571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.257604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 00:36:40.667 [2024-12-16 02:58:11.257796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.257827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 00:36:40.667 [2024-12-16 02:58:11.258092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.258125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 00:36:40.667 [2024-12-16 02:58:11.258319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.258350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 00:36:40.667 [2024-12-16 02:58:11.258542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.258574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 
00:36:40.667 [2024-12-16 02:58:11.258837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.258879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 00:36:40.667 [2024-12-16 02:58:11.259053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.259085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 00:36:40.667 [2024-12-16 02:58:11.259280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.259312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 00:36:40.667 [2024-12-16 02:58:11.259451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.259483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 00:36:40.667 [2024-12-16 02:58:11.259739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.259771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 
00:36:40.667 [2024-12-16 02:58:11.259967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.260000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 00:36:40.667 [2024-12-16 02:58:11.260171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.260203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 00:36:40.667 [2024-12-16 02:58:11.260484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.260516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 00:36:40.667 [2024-12-16 02:58:11.260639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.260671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 00:36:40.667 [2024-12-16 02:58:11.260887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.260920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 
00:36:40.667 [2024-12-16 02:58:11.261117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.261156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 00:36:40.667 [2024-12-16 02:58:11.261292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.261324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 00:36:40.667 [2024-12-16 02:58:11.261503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.261535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 00:36:40.667 [2024-12-16 02:58:11.261744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.261776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 00:36:40.667 [2024-12-16 02:58:11.261959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.261991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 
00:36:40.667 [2024-12-16 02:58:11.262167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.262198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 00:36:40.667 [2024-12-16 02:58:11.262376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.667 [2024-12-16 02:58:11.262408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.667 qpair failed and we were unable to recover it. 00:36:40.668 [2024-12-16 02:58:11.262589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.668 [2024-12-16 02:58:11.262621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.668 qpair failed and we were unable to recover it. 00:36:40.668 [2024-12-16 02:58:11.262840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.668 [2024-12-16 02:58:11.262880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.668 qpair failed and we were unable to recover it. 00:36:40.668 [2024-12-16 02:58:11.263119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.668 [2024-12-16 02:58:11.263151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.668 qpair failed and we were unable to recover it. 
00:36:40.668 [2024-12-16 02:58:11.263400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.668 [2024-12-16 02:58:11.263432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.668 qpair failed and we were unable to recover it. 00:36:40.668 [2024-12-16 02:58:11.263567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.668 [2024-12-16 02:58:11.263598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.668 qpair failed and we were unable to recover it. 00:36:40.668 [2024-12-16 02:58:11.263771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.668 [2024-12-16 02:58:11.263803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.668 qpair failed and we were unable to recover it. 00:36:40.668 [2024-12-16 02:58:11.263992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.668 [2024-12-16 02:58:11.264025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.668 qpair failed and we were unable to recover it. 00:36:40.668 [2024-12-16 02:58:11.264231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.668 [2024-12-16 02:58:11.264264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.668 qpair failed and we were unable to recover it. 
00:36:40.668 [2024-12-16 02:58:11.264393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.668 [2024-12-16 02:58:11.264425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.668 qpair failed and we were unable to recover it. 00:36:40.668 [2024-12-16 02:58:11.264605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.668 [2024-12-16 02:58:11.264637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.668 qpair failed and we were unable to recover it. 00:36:40.668 [2024-12-16 02:58:11.264826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.668 [2024-12-16 02:58:11.264867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.668 qpair failed and we were unable to recover it. 00:36:40.668 [2024-12-16 02:58:11.264990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.668 [2024-12-16 02:58:11.265021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.668 qpair failed and we were unable to recover it. 00:36:40.668 [2024-12-16 02:58:11.265147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.668 [2024-12-16 02:58:11.265179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.668 qpair failed and we were unable to recover it. 
00:36:40.668 [2024-12-16 02:58:11.265352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.668 [2024-12-16 02:58:11.265384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.668 qpair failed and we were unable to recover it. 00:36:40.668 [2024-12-16 02:58:11.265567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.668 [2024-12-16 02:58:11.265598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.668 qpair failed and we were unable to recover it. 00:36:40.668 [2024-12-16 02:58:11.265870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.668 [2024-12-16 02:58:11.265904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.668 qpair failed and we were unable to recover it. 00:36:40.668 [2024-12-16 02:58:11.266033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.668 [2024-12-16 02:58:11.266065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.948 qpair failed and we were unable to recover it. 00:36:40.948 [2024-12-16 02:58:11.266203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.948 [2024-12-16 02:58:11.266236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.948 qpair failed and we were unable to recover it. 
00:36:40.948 [2024-12-16 02:58:11.266412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.948 [2024-12-16 02:58:11.266444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.948 qpair failed and we were unable to recover it. 00:36:40.948 [2024-12-16 02:58:11.266548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.948 [2024-12-16 02:58:11.266580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.948 qpair failed and we were unable to recover it. 00:36:40.948 [2024-12-16 02:58:11.266788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.948 [2024-12-16 02:58:11.266819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.948 qpair failed and we were unable to recover it. 00:36:40.948 [2024-12-16 02:58:11.267031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.948 [2024-12-16 02:58:11.267064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.948 qpair failed and we were unable to recover it. 00:36:40.948 [2024-12-16 02:58:11.267241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.948 [2024-12-16 02:58:11.267273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.948 qpair failed and we were unable to recover it. 
00:36:40.948 [2024-12-16 02:58:11.267506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.948 [2024-12-16 02:58:11.267538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.948 qpair failed and we were unable to recover it. 00:36:40.948 [2024-12-16 02:58:11.267768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.948 [2024-12-16 02:58:11.267799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.948 qpair failed and we were unable to recover it. 00:36:40.948 [2024-12-16 02:58:11.267993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.948 [2024-12-16 02:58:11.268025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.948 qpair failed and we were unable to recover it. 00:36:40.948 [2024-12-16 02:58:11.268205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.949 [2024-12-16 02:58:11.268237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.949 qpair failed and we were unable to recover it. 00:36:40.949 [2024-12-16 02:58:11.268359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.949 [2024-12-16 02:58:11.268391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.949 qpair failed and we were unable to recover it. 
00:36:40.949 [2024-12-16 02:58:11.268632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.949 [2024-12-16 02:58:11.268663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.949 qpair failed and we were unable to recover it. 00:36:40.949 [2024-12-16 02:58:11.268866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.949 [2024-12-16 02:58:11.268898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.949 qpair failed and we were unable to recover it. 00:36:40.949 [2024-12-16 02:58:11.269108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.949 [2024-12-16 02:58:11.269140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.949 qpair failed and we were unable to recover it. 00:36:40.949 [2024-12-16 02:58:11.269427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.949 [2024-12-16 02:58:11.269458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.949 qpair failed and we were unable to recover it. 00:36:40.949 [2024-12-16 02:58:11.269660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.949 [2024-12-16 02:58:11.269692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.949 qpair failed and we were unable to recover it. 
00:36:40.949 [2024-12-16 02:58:11.269813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.949 [2024-12-16 02:58:11.269845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.949 qpair failed and we were unable to recover it. 00:36:40.949 [2024-12-16 02:58:11.270056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.949 [2024-12-16 02:58:11.270089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.949 qpair failed and we were unable to recover it. 00:36:40.949 [2024-12-16 02:58:11.270306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.949 [2024-12-16 02:58:11.270338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.949 qpair failed and we were unable to recover it. 00:36:40.949 [2024-12-16 02:58:11.270442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.949 [2024-12-16 02:58:11.270473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.949 qpair failed and we were unable to recover it. 00:36:40.949 [2024-12-16 02:58:11.270592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.949 [2024-12-16 02:58:11.270623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.949 qpair failed and we were unable to recover it. 
00:36:40.949 [2024-12-16 02:58:11.270758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.949 [2024-12-16 02:58:11.270789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.949 qpair failed and we were unable to recover it. 00:36:40.949 [2024-12-16 02:58:11.271034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.949 [2024-12-16 02:58:11.271067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.949 qpair failed and we were unable to recover it. 00:36:40.949 [2024-12-16 02:58:11.271256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.949 [2024-12-16 02:58:11.271287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.949 qpair failed and we were unable to recover it. 00:36:40.949 [2024-12-16 02:58:11.271469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.949 [2024-12-16 02:58:11.271500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.949 qpair failed and we were unable to recover it. 00:36:40.949 [2024-12-16 02:58:11.271673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.949 [2024-12-16 02:58:11.271704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.949 qpair failed and we were unable to recover it. 
00:36:40.949 [2024-12-16 02:58:11.271887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.949 [2024-12-16 02:58:11.271920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.949 qpair failed and we were unable to recover it. 
00:36:40.953 [2024-12-16 02:58:11.297428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.953 [2024-12-16 02:58:11.297459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.953 qpair failed and we were unable to recover it. 00:36:40.953 [2024-12-16 02:58:11.297638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.953 [2024-12-16 02:58:11.297670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.953 qpair failed and we were unable to recover it. 00:36:40.953 [2024-12-16 02:58:11.297839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.953 [2024-12-16 02:58:11.297880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.953 qpair failed and we were unable to recover it. 00:36:40.953 [2024-12-16 02:58:11.298010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.953 [2024-12-16 02:58:11.298042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.953 qpair failed and we were unable to recover it. 00:36:40.953 [2024-12-16 02:58:11.298228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.953 [2024-12-16 02:58:11.298259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.953 qpair failed and we were unable to recover it. 
00:36:40.953 [2024-12-16 02:58:11.298444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.953 [2024-12-16 02:58:11.298475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.953 qpair failed and we were unable to recover it. 00:36:40.953 [2024-12-16 02:58:11.298601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.953 [2024-12-16 02:58:11.298632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.953 qpair failed and we were unable to recover it. 00:36:40.953 [2024-12-16 02:58:11.298913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.953 [2024-12-16 02:58:11.298946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.953 qpair failed and we were unable to recover it. 00:36:40.953 [2024-12-16 02:58:11.299057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.953 [2024-12-16 02:58:11.299088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.953 qpair failed and we were unable to recover it. 00:36:40.953 [2024-12-16 02:58:11.299281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.953 [2024-12-16 02:58:11.299312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.953 qpair failed and we were unable to recover it. 
00:36:40.953 [2024-12-16 02:58:11.299578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.953 [2024-12-16 02:58:11.299610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.953 qpair failed and we were unable to recover it. 00:36:40.953 [2024-12-16 02:58:11.299822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.953 [2024-12-16 02:58:11.299862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.953 qpair failed and we were unable to recover it. 00:36:40.953 [2024-12-16 02:58:11.300061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.953 [2024-12-16 02:58:11.300098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.953 qpair failed and we were unable to recover it. 00:36:40.953 [2024-12-16 02:58:11.300333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.953 [2024-12-16 02:58:11.300365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.953 qpair failed and we were unable to recover it. 00:36:40.953 [2024-12-16 02:58:11.300482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.953 [2024-12-16 02:58:11.300513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.953 qpair failed and we were unable to recover it. 
00:36:40.953 [2024-12-16 02:58:11.300624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.953 [2024-12-16 02:58:11.300655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.953 qpair failed and we were unable to recover it. 00:36:40.953 [2024-12-16 02:58:11.300893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.953 [2024-12-16 02:58:11.300926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.953 qpair failed and we were unable to recover it. 00:36:40.953 [2024-12-16 02:58:11.301109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.953 [2024-12-16 02:58:11.301140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.953 qpair failed and we were unable to recover it. 00:36:40.953 [2024-12-16 02:58:11.301343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.953 [2024-12-16 02:58:11.301374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.953 qpair failed and we were unable to recover it. 00:36:40.953 [2024-12-16 02:58:11.301504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.953 [2024-12-16 02:58:11.301535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.953 qpair failed and we were unable to recover it. 
00:36:40.953 [2024-12-16 02:58:11.301723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.953 [2024-12-16 02:58:11.301754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.953 qpair failed and we were unable to recover it. 00:36:40.953 [2024-12-16 02:58:11.301955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.953 [2024-12-16 02:58:11.301988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.953 qpair failed and we were unable to recover it. 00:36:40.953 [2024-12-16 02:58:11.302164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.953 [2024-12-16 02:58:11.302196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.953 qpair failed and we were unable to recover it. 00:36:40.953 [2024-12-16 02:58:11.302384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.953 [2024-12-16 02:58:11.302416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.953 qpair failed and we were unable to recover it. 00:36:40.953 [2024-12-16 02:58:11.302532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.953 [2024-12-16 02:58:11.302563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.953 qpair failed and we were unable to recover it. 
00:36:40.953 [2024-12-16 02:58:11.302799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.953 [2024-12-16 02:58:11.302831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.953 qpair failed and we were unable to recover it. 00:36:40.953 [2024-12-16 02:58:11.303030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.953 [2024-12-16 02:58:11.303062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.953 qpair failed and we were unable to recover it. 00:36:40.953 [2024-12-16 02:58:11.303192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.953 [2024-12-16 02:58:11.303223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.953 qpair failed and we were unable to recover it. 00:36:40.953 [2024-12-16 02:58:11.303414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.953 [2024-12-16 02:58:11.303446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.953 qpair failed and we were unable to recover it. 00:36:40.953 [2024-12-16 02:58:11.303682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.953 [2024-12-16 02:58:11.303713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.953 qpair failed and we were unable to recover it. 
00:36:40.953 [2024-12-16 02:58:11.303891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.953 [2024-12-16 02:58:11.303924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 00:36:40.954 [2024-12-16 02:58:11.304026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.304057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 00:36:40.954 [2024-12-16 02:58:11.304294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.304325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 00:36:40.954 [2024-12-16 02:58:11.304459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.304491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 00:36:40.954 [2024-12-16 02:58:11.304675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.304706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 
00:36:40.954 [2024-12-16 02:58:11.304917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.304951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 00:36:40.954 [2024-12-16 02:58:11.305081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.305113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 00:36:40.954 [2024-12-16 02:58:11.305236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.305267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 00:36:40.954 [2024-12-16 02:58:11.305382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.305414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 00:36:40.954 [2024-12-16 02:58:11.305620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.305658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 
00:36:40.954 [2024-12-16 02:58:11.305839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.305879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 00:36:40.954 [2024-12-16 02:58:11.306118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.306150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 00:36:40.954 [2024-12-16 02:58:11.306267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.306299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 00:36:40.954 [2024-12-16 02:58:11.306415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.306447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 00:36:40.954 [2024-12-16 02:58:11.306649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.306680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 
00:36:40.954 [2024-12-16 02:58:11.306883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.306915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 00:36:40.954 [2024-12-16 02:58:11.307098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.307130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 00:36:40.954 [2024-12-16 02:58:11.307308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.307339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 00:36:40.954 [2024-12-16 02:58:11.307458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.307489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 00:36:40.954 [2024-12-16 02:58:11.307724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.307755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 
00:36:40.954 [2024-12-16 02:58:11.307925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.307958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 00:36:40.954 [2024-12-16 02:58:11.308072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.308103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 00:36:40.954 [2024-12-16 02:58:11.308354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.308385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 00:36:40.954 [2024-12-16 02:58:11.308593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.308626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 00:36:40.954 [2024-12-16 02:58:11.308807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.308838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 
00:36:40.954 [2024-12-16 02:58:11.308982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.309015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 00:36:40.954 [2024-12-16 02:58:11.309185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.309216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 00:36:40.954 [2024-12-16 02:58:11.309503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.309534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 00:36:40.954 [2024-12-16 02:58:11.309704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.309736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 00:36:40.954 [2024-12-16 02:58:11.309840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.309882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 
00:36:40.954 [2024-12-16 02:58:11.309996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.310028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 00:36:40.954 [2024-12-16 02:58:11.310267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.310299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 00:36:40.954 [2024-12-16 02:58:11.310448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.310479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 00:36:40.954 [2024-12-16 02:58:11.310718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.310749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 00:36:40.954 [2024-12-16 02:58:11.310933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.310966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 
00:36:40.954 [2024-12-16 02:58:11.311139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.311171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 00:36:40.954 [2024-12-16 02:58:11.311356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.954 [2024-12-16 02:58:11.311387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.954 qpair failed and we were unable to recover it. 00:36:40.955 [2024-12-16 02:58:11.311609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.955 [2024-12-16 02:58:11.311641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.955 qpair failed and we were unable to recover it. 00:36:40.955 [2024-12-16 02:58:11.311755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.955 [2024-12-16 02:58:11.311787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.955 qpair failed and we were unable to recover it. 00:36:40.955 [2024-12-16 02:58:11.311903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.955 [2024-12-16 02:58:11.311935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.955 qpair failed and we were unable to recover it. 
00:36:40.955 [2024-12-16 02:58:11.312144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.955 [2024-12-16 02:58:11.312175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.955 qpair failed and we were unable to recover it. 00:36:40.955 [2024-12-16 02:58:11.312291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.955 [2024-12-16 02:58:11.312323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.955 qpair failed and we were unable to recover it. 00:36:40.955 [2024-12-16 02:58:11.312507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.955 [2024-12-16 02:58:11.312539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.955 qpair failed and we were unable to recover it. 00:36:40.955 [2024-12-16 02:58:11.312651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.955 [2024-12-16 02:58:11.312683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.955 qpair failed and we were unable to recover it. 00:36:40.955 [2024-12-16 02:58:11.312891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.955 [2024-12-16 02:58:11.312925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.955 qpair failed and we were unable to recover it. 
00:36:40.955 [2024-12-16 02:58:11.313189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.955 [2024-12-16 02:58:11.313221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.955 qpair failed and we were unable to recover it.
[identical connect()/qpair-failure records repeat for every retry attempt from 02:58:11.313 through 02:58:11.337, all with errno = 111 against tqpair=0xd9fcd0, addr=10.0.0.2, port=4420]
00:36:40.958 [2024-12-16 02:58:11.337585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.958 [2024-12-16 02:58:11.337617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.958 qpair failed and we were unable to recover it. 00:36:40.958 [2024-12-16 02:58:11.337808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.958 [2024-12-16 02:58:11.337839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.958 qpair failed and we were unable to recover it. 00:36:40.958 [2024-12-16 02:58:11.338054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.958 [2024-12-16 02:58:11.338084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.958 qpair failed and we were unable to recover it. 00:36:40.958 [2024-12-16 02:58:11.338346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.958 [2024-12-16 02:58:11.338377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.958 qpair failed and we were unable to recover it. 00:36:40.958 [2024-12-16 02:58:11.338563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.958 [2024-12-16 02:58:11.338595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.958 qpair failed and we were unable to recover it. 
00:36:40.958 [2024-12-16 02:58:11.338722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.958 [2024-12-16 02:58:11.338753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.958 qpair failed and we were unable to recover it. 00:36:40.958 [2024-12-16 02:58:11.338948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.958 [2024-12-16 02:58:11.338980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.958 qpair failed and we were unable to recover it. 00:36:40.958 [2024-12-16 02:58:11.339115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.958 [2024-12-16 02:58:11.339147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.958 qpair failed and we were unable to recover it. 00:36:40.958 [2024-12-16 02:58:11.339331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.958 [2024-12-16 02:58:11.339362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.958 qpair failed and we were unable to recover it. 00:36:40.958 [2024-12-16 02:58:11.339491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.958 [2024-12-16 02:58:11.339522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.958 qpair failed and we were unable to recover it. 
00:36:40.958 [2024-12-16 02:58:11.339654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.958 [2024-12-16 02:58:11.339686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.958 qpair failed and we were unable to recover it. 00:36:40.958 [2024-12-16 02:58:11.339804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.958 [2024-12-16 02:58:11.339834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.958 qpair failed and we were unable to recover it. 00:36:40.958 [2024-12-16 02:58:11.340087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.958 [2024-12-16 02:58:11.340122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.958 qpair failed and we were unable to recover it. 00:36:40.958 [2024-12-16 02:58:11.340389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.958 [2024-12-16 02:58:11.340422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.958 qpair failed and we were unable to recover it. 00:36:40.958 [2024-12-16 02:58:11.340542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.958 [2024-12-16 02:58:11.340577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.958 qpair failed and we were unable to recover it. 
00:36:40.958 [2024-12-16 02:58:11.340721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.958 [2024-12-16 02:58:11.340750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.958 qpair failed and we were unable to recover it. 00:36:40.958 [2024-12-16 02:58:11.340927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.958 [2024-12-16 02:58:11.340962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.958 qpair failed and we were unable to recover it. 00:36:40.958 [2024-12-16 02:58:11.341159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.958 [2024-12-16 02:58:11.341193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.958 qpair failed and we were unable to recover it. 00:36:40.958 [2024-12-16 02:58:11.341410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.958 [2024-12-16 02:58:11.341444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.958 qpair failed and we were unable to recover it. 00:36:40.958 [2024-12-16 02:58:11.341707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.958 [2024-12-16 02:58:11.341738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.958 qpair failed and we were unable to recover it. 
00:36:40.958 [2024-12-16 02:58:11.341957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.958 [2024-12-16 02:58:11.341990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.958 qpair failed and we were unable to recover it. 00:36:40.959 [2024-12-16 02:58:11.342254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.959 [2024-12-16 02:58:11.342286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.959 qpair failed and we were unable to recover it. 00:36:40.959 [2024-12-16 02:58:11.342469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.959 [2024-12-16 02:58:11.342501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.959 qpair failed and we were unable to recover it. 00:36:40.959 [2024-12-16 02:58:11.342771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.959 [2024-12-16 02:58:11.342803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.959 qpair failed and we were unable to recover it. 00:36:40.959 [2024-12-16 02:58:11.342924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.959 [2024-12-16 02:58:11.342956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.959 qpair failed and we were unable to recover it. 
00:36:40.959 [2024-12-16 02:58:11.343142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.959 [2024-12-16 02:58:11.343173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.959 qpair failed and we were unable to recover it. 00:36:40.959 [2024-12-16 02:58:11.343292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.959 [2024-12-16 02:58:11.343330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.959 qpair failed and we were unable to recover it. 00:36:40.959 [2024-12-16 02:58:11.343503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.959 [2024-12-16 02:58:11.343535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.959 qpair failed and we were unable to recover it. 00:36:40.959 [2024-12-16 02:58:11.343756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.959 [2024-12-16 02:58:11.343788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.959 qpair failed and we were unable to recover it. 00:36:40.959 [2024-12-16 02:58:11.344032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.959 [2024-12-16 02:58:11.344065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.959 qpair failed and we were unable to recover it. 
00:36:40.959 [2024-12-16 02:58:11.344278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.959 [2024-12-16 02:58:11.344310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.959 qpair failed and we were unable to recover it. 00:36:40.959 [2024-12-16 02:58:11.344412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.959 [2024-12-16 02:58:11.344443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.959 qpair failed and we were unable to recover it. 00:36:40.959 [2024-12-16 02:58:11.344706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.959 [2024-12-16 02:58:11.344738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.959 qpair failed and we were unable to recover it. 00:36:40.959 [2024-12-16 02:58:11.344917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.959 [2024-12-16 02:58:11.344951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.959 qpair failed and we were unable to recover it. 00:36:40.959 [2024-12-16 02:58:11.345144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.959 [2024-12-16 02:58:11.345176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.959 qpair failed and we were unable to recover it. 
00:36:40.959 [2024-12-16 02:58:11.345360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.959 [2024-12-16 02:58:11.345391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.959 qpair failed and we were unable to recover it. 00:36:40.959 [2024-12-16 02:58:11.345640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.959 [2024-12-16 02:58:11.345671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.959 qpair failed and we were unable to recover it. 00:36:40.959 [2024-12-16 02:58:11.345790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.959 [2024-12-16 02:58:11.345821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.959 qpair failed and we were unable to recover it. 00:36:40.959 [2024-12-16 02:58:11.346046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.959 [2024-12-16 02:58:11.346078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.959 qpair failed and we were unable to recover it. 00:36:40.959 [2024-12-16 02:58:11.346253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.959 [2024-12-16 02:58:11.346284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.959 qpair failed and we were unable to recover it. 
00:36:40.959 [2024-12-16 02:58:11.346480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.959 [2024-12-16 02:58:11.346512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.959 qpair failed and we were unable to recover it. 00:36:40.959 [2024-12-16 02:58:11.346748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.959 [2024-12-16 02:58:11.346779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.959 qpair failed and we were unable to recover it. 00:36:40.959 [2024-12-16 02:58:11.346965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.959 [2024-12-16 02:58:11.346998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.959 qpair failed and we were unable to recover it. 00:36:40.959 [2024-12-16 02:58:11.347216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.959 [2024-12-16 02:58:11.347247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.959 qpair failed and we were unable to recover it. 00:36:40.959 [2024-12-16 02:58:11.347421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.959 [2024-12-16 02:58:11.347453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.959 qpair failed and we were unable to recover it. 
00:36:40.959 [2024-12-16 02:58:11.347641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.959 [2024-12-16 02:58:11.347673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.959 qpair failed and we were unable to recover it. 00:36:40.959 [2024-12-16 02:58:11.347886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.959 [2024-12-16 02:58:11.347919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.959 qpair failed and we were unable to recover it. 00:36:40.959 [2024-12-16 02:58:11.348193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.959 [2024-12-16 02:58:11.348224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.959 qpair failed and we were unable to recover it. 00:36:40.959 [2024-12-16 02:58:11.348402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.959 [2024-12-16 02:58:11.348434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.959 qpair failed and we were unable to recover it. 00:36:40.959 [2024-12-16 02:58:11.348557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.959 [2024-12-16 02:58:11.348589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.959 qpair failed and we were unable to recover it. 
00:36:40.960 [2024-12-16 02:58:11.348703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.960 [2024-12-16 02:58:11.348735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.960 qpair failed and we were unable to recover it. 00:36:40.960 [2024-12-16 02:58:11.348909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.960 [2024-12-16 02:58:11.348942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.960 qpair failed and we were unable to recover it. 00:36:40.960 [2024-12-16 02:58:11.349130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.960 [2024-12-16 02:58:11.349161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.960 qpair failed and we were unable to recover it. 00:36:40.960 [2024-12-16 02:58:11.349345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.960 [2024-12-16 02:58:11.349378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.960 qpair failed and we were unable to recover it. 00:36:40.960 [2024-12-16 02:58:11.349502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.960 [2024-12-16 02:58:11.349535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.960 qpair failed and we were unable to recover it. 
00:36:40.960 [2024-12-16 02:58:11.349793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.960 [2024-12-16 02:58:11.349825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.960 qpair failed and we were unable to recover it. 00:36:40.960 [2024-12-16 02:58:11.350026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.960 [2024-12-16 02:58:11.350058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.960 qpair failed and we were unable to recover it. 00:36:40.960 [2024-12-16 02:58:11.350249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.960 [2024-12-16 02:58:11.350283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.960 qpair failed and we were unable to recover it. 00:36:40.960 [2024-12-16 02:58:11.350410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.960 [2024-12-16 02:58:11.350442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.960 qpair failed and we were unable to recover it. 00:36:40.960 [2024-12-16 02:58:11.350634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.960 [2024-12-16 02:58:11.350666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.960 qpair failed and we were unable to recover it. 
00:36:40.960 [2024-12-16 02:58:11.350801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.960 [2024-12-16 02:58:11.350833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.960 qpair failed and we were unable to recover it. 00:36:40.960 [2024-12-16 02:58:11.351032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.960 [2024-12-16 02:58:11.351064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.960 qpair failed and we were unable to recover it. 00:36:40.960 [2024-12-16 02:58:11.351237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.960 [2024-12-16 02:58:11.351269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.960 qpair failed and we were unable to recover it. 00:36:40.960 [2024-12-16 02:58:11.351455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.960 [2024-12-16 02:58:11.351487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.960 qpair failed and we were unable to recover it. 00:36:40.960 [2024-12-16 02:58:11.351748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.960 [2024-12-16 02:58:11.351780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.960 qpair failed and we were unable to recover it. 
00:36:40.960 [2024-12-16 02:58:11.351912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.960 [2024-12-16 02:58:11.351946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.960 qpair failed and we were unable to recover it. 00:36:40.960 [2024-12-16 02:58:11.352064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.960 [2024-12-16 02:58:11.352096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.960 qpair failed and we were unable to recover it. 00:36:40.960 [2024-12-16 02:58:11.352218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.960 [2024-12-16 02:58:11.352255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.960 qpair failed and we were unable to recover it. 00:36:40.960 [2024-12-16 02:58:11.352446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.960 [2024-12-16 02:58:11.352478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.960 qpair failed and we were unable to recover it. 00:36:40.960 [2024-12-16 02:58:11.352664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.960 [2024-12-16 02:58:11.352695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.960 qpair failed and we were unable to recover it. 
00:36:40.960 [2024-12-16 02:58:11.352882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.960 [2024-12-16 02:58:11.352915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.960 qpair failed and we were unable to recover it. 00:36:40.960 [2024-12-16 02:58:11.353100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.960 [2024-12-16 02:58:11.353131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.960 qpair failed and we were unable to recover it. 00:36:40.960 [2024-12-16 02:58:11.353316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.960 [2024-12-16 02:58:11.353347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.960 qpair failed and we were unable to recover it. 00:36:40.960 [2024-12-16 02:58:11.353516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.960 [2024-12-16 02:58:11.353547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.960 qpair failed and we were unable to recover it. 00:36:40.960 [2024-12-16 02:58:11.353726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.960 [2024-12-16 02:58:11.353757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.960 qpair failed and we were unable to recover it. 
00:36:40.960 [2024-12-16 02:58:11.353963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.960 [2024-12-16 02:58:11.353996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:40.960 qpair failed and we were unable to recover it.
00:36:40.960 [... the same three-line failure (posix.c:1054: connect() failed, errno = 111 → nvme_tcp.c:2288: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 → "qpair failed and we were unable to recover it.") repeats continuously from 02:58:11.354129 through 02:58:11.378008; identical repetitions omitted ...]
00:36:40.964 [2024-12-16 02:58:11.378204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.378235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 00:36:40.964 [2024-12-16 02:58:11.378407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.378437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 00:36:40.964 [2024-12-16 02:58:11.378705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.378737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 00:36:40.964 [2024-12-16 02:58:11.378936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.378969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 00:36:40.964 [2024-12-16 02:58:11.379226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.379257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 
00:36:40.964 [2024-12-16 02:58:11.379475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.379506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 00:36:40.964 [2024-12-16 02:58:11.379747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.379777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 00:36:40.964 [2024-12-16 02:58:11.379952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.379985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 00:36:40.964 [2024-12-16 02:58:11.380188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.380220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 00:36:40.964 [2024-12-16 02:58:11.380463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.380493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 
00:36:40.964 [2024-12-16 02:58:11.380690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.380721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 00:36:40.964 [2024-12-16 02:58:11.380984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.381017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 00:36:40.964 [2024-12-16 02:58:11.381257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.381288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 00:36:40.964 [2024-12-16 02:58:11.381480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.381517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 00:36:40.964 [2024-12-16 02:58:11.381710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.381742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 
00:36:40.964 [2024-12-16 02:58:11.381926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.381959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 00:36:40.964 [2024-12-16 02:58:11.382171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.382202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 00:36:40.964 [2024-12-16 02:58:11.382393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.382426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 00:36:40.964 [2024-12-16 02:58:11.382527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.382558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 00:36:40.964 [2024-12-16 02:58:11.382802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.382833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 
00:36:40.964 [2024-12-16 02:58:11.382970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.383002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 00:36:40.964 [2024-12-16 02:58:11.383127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.383159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 00:36:40.964 [2024-12-16 02:58:11.383290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.383322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 00:36:40.964 [2024-12-16 02:58:11.383616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.383647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 00:36:40.964 [2024-12-16 02:58:11.383896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.383929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 
00:36:40.964 [2024-12-16 02:58:11.384061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.384092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 00:36:40.964 [2024-12-16 02:58:11.384348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.384379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 00:36:40.964 [2024-12-16 02:58:11.384518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.384549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 00:36:40.964 [2024-12-16 02:58:11.384668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.384699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 00:36:40.964 [2024-12-16 02:58:11.384894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.384927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 
00:36:40.964 [2024-12-16 02:58:11.385113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.385144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 00:36:40.964 [2024-12-16 02:58:11.385331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.385362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 00:36:40.964 [2024-12-16 02:58:11.385612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.385643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 00:36:40.964 [2024-12-16 02:58:11.385827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.385867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 00:36:40.964 [2024-12-16 02:58:11.386068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.386100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 
00:36:40.964 [2024-12-16 02:58:11.386348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.964 [2024-12-16 02:58:11.386379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.964 qpair failed and we were unable to recover it. 00:36:40.964 [2024-12-16 02:58:11.386497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.386528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-12-16 02:58:11.386754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.386786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-12-16 02:58:11.386994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.387026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-12-16 02:58:11.387218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.387249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 
00:36:40.965 [2024-12-16 02:58:11.387498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.387534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-12-16 02:58:11.387732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.387764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-12-16 02:58:11.387952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.387986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-12-16 02:58:11.388181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.388212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-12-16 02:58:11.388330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.388362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 
00:36:40.965 [2024-12-16 02:58:11.388489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.388520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-12-16 02:58:11.388655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.388685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-12-16 02:58:11.388899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.388933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-12-16 02:58:11.389123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.389155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-12-16 02:58:11.389268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.389300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 
00:36:40.965 [2024-12-16 02:58:11.389518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.389549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-12-16 02:58:11.389785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.389816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-12-16 02:58:11.390060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.390093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-12-16 02:58:11.390205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.390237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-12-16 02:58:11.390437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.390475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 
00:36:40.965 [2024-12-16 02:58:11.390690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.390721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-12-16 02:58:11.390831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.390872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-12-16 02:58:11.391108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.391140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-12-16 02:58:11.391259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.391290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-12-16 02:58:11.391507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.391539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 
00:36:40.965 [2024-12-16 02:58:11.391671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.391702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-12-16 02:58:11.391914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.391948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-12-16 02:58:11.392122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.392153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-12-16 02:58:11.392423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.392454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-12-16 02:58:11.392710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.392742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 
00:36:40.965 [2024-12-16 02:58:11.392998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.393032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-12-16 02:58:11.393204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.393235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-12-16 02:58:11.393495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.393526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-12-16 02:58:11.393719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.393751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-12-16 02:58:11.393879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.393912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 
00:36:40.965 [2024-12-16 02:58:11.394097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.394128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-12-16 02:58:11.394419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.394451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-12-16 02:58:11.394591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.394622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-12-16 02:58:11.394736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.394768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 00:36:40.965 [2024-12-16 02:58:11.394970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.395002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.965 qpair failed and we were unable to recover it. 
00:36:40.965 [2024-12-16 02:58:11.395242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.965 [2024-12-16 02:58:11.395272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.966 qpair failed and we were unable to recover it.
00:36:40.967 [2024-12-16 02:58:11.405332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.967 [2024-12-16 02:58:11.405402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.967 qpair failed and we were unable to recover it.
00:36:40.969 [2024-12-16 02:58:11.420634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.969 [2024-12-16 02:58:11.420666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.969 qpair failed and we were unable to recover it. 00:36:40.969 [2024-12-16 02:58:11.420912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.969 [2024-12-16 02:58:11.420945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.969 qpair failed and we were unable to recover it. 00:36:40.969 [2024-12-16 02:58:11.421186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.969 [2024-12-16 02:58:11.421218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.969 qpair failed and we were unable to recover it. 00:36:40.969 [2024-12-16 02:58:11.421513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.969 [2024-12-16 02:58:11.421544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.969 qpair failed and we were unable to recover it. 00:36:40.969 [2024-12-16 02:58:11.421725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.969 [2024-12-16 02:58:11.421756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.969 qpair failed and we were unable to recover it. 
00:36:40.969 [2024-12-16 02:58:11.421889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.969 [2024-12-16 02:58:11.421922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.969 qpair failed and we were unable to recover it. 00:36:40.969 [2024-12-16 02:58:11.422160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.969 [2024-12-16 02:58:11.422192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.969 qpair failed and we were unable to recover it. 00:36:40.969 [2024-12-16 02:58:11.422306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.969 [2024-12-16 02:58:11.422337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.969 qpair failed and we were unable to recover it. 00:36:40.969 [2024-12-16 02:58:11.422601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.969 [2024-12-16 02:58:11.422633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.969 qpair failed and we were unable to recover it. 00:36:40.969 [2024-12-16 02:58:11.422889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.969 [2024-12-16 02:58:11.422922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.969 qpair failed and we were unable to recover it. 
00:36:40.969 [2024-12-16 02:58:11.423185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.969 [2024-12-16 02:58:11.423218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.969 qpair failed and we were unable to recover it. 00:36:40.969 [2024-12-16 02:58:11.423403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.969 [2024-12-16 02:58:11.423434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.969 qpair failed and we were unable to recover it. 00:36:40.969 [2024-12-16 02:58:11.423675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.969 [2024-12-16 02:58:11.423712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.969 qpair failed and we were unable to recover it. 00:36:40.969 [2024-12-16 02:58:11.423857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.969 [2024-12-16 02:58:11.423890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.969 qpair failed and we were unable to recover it. 00:36:40.969 [2024-12-16 02:58:11.424160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.969 [2024-12-16 02:58:11.424192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.969 qpair failed and we were unable to recover it. 
00:36:40.969 [2024-12-16 02:58:11.424451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.969 [2024-12-16 02:58:11.424482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.969 qpair failed and we were unable to recover it. 00:36:40.969 [2024-12-16 02:58:11.424605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.969 [2024-12-16 02:58:11.424636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.969 qpair failed and we were unable to recover it. 00:36:40.969 [2024-12-16 02:58:11.424857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.969 [2024-12-16 02:58:11.424890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.969 qpair failed and we were unable to recover it. 00:36:40.969 [2024-12-16 02:58:11.425069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.969 [2024-12-16 02:58:11.425100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.969 qpair failed and we were unable to recover it. 00:36:40.969 [2024-12-16 02:58:11.425231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.969 [2024-12-16 02:58:11.425262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.969 qpair failed and we were unable to recover it. 
00:36:40.969 [2024-12-16 02:58:11.425506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.969 [2024-12-16 02:58:11.425538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.969 qpair failed and we were unable to recover it. 00:36:40.969 [2024-12-16 02:58:11.425781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.969 [2024-12-16 02:58:11.425813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.969 qpair failed and we were unable to recover it. 00:36:40.969 [2024-12-16 02:58:11.426087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.969 [2024-12-16 02:58:11.426118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.969 qpair failed and we were unable to recover it. 00:36:40.969 [2024-12-16 02:58:11.426360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.969 [2024-12-16 02:58:11.426392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.969 qpair failed and we were unable to recover it. 00:36:40.969 [2024-12-16 02:58:11.426599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.969 [2024-12-16 02:58:11.426631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.969 qpair failed and we were unable to recover it. 
00:36:40.969 [2024-12-16 02:58:11.426888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.969 [2024-12-16 02:58:11.426921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.969 qpair failed and we were unable to recover it. 00:36:40.969 [2024-12-16 02:58:11.427115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.969 [2024-12-16 02:58:11.427148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.969 qpair failed and we were unable to recover it. 00:36:40.969 [2024-12-16 02:58:11.427265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.427296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 00:36:40.970 [2024-12-16 02:58:11.427471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.427502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 00:36:40.970 [2024-12-16 02:58:11.427622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.427655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 
00:36:40.970 [2024-12-16 02:58:11.427894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.427926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 00:36:40.970 [2024-12-16 02:58:11.428109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.428140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 00:36:40.970 [2024-12-16 02:58:11.428385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.428417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 00:36:40.970 [2024-12-16 02:58:11.428591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.428622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 00:36:40.970 [2024-12-16 02:58:11.428814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.428845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 
00:36:40.970 [2024-12-16 02:58:11.429068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.429100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 00:36:40.970 [2024-12-16 02:58:11.429226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.429257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 00:36:40.970 [2024-12-16 02:58:11.429445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.429476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 00:36:40.970 [2024-12-16 02:58:11.429651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.429683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 00:36:40.970 [2024-12-16 02:58:11.429881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.429920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 
00:36:40.970 [2024-12-16 02:58:11.430105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.430136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 00:36:40.970 [2024-12-16 02:58:11.430270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.430302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 00:36:40.970 [2024-12-16 02:58:11.430542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.430573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 00:36:40.970 [2024-12-16 02:58:11.430688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.430719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 00:36:40.970 [2024-12-16 02:58:11.430906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.430940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 
00:36:40.970 [2024-12-16 02:58:11.431182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.431213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 00:36:40.970 [2024-12-16 02:58:11.431479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.431511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 00:36:40.970 [2024-12-16 02:58:11.431724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.431756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 00:36:40.970 [2024-12-16 02:58:11.431944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.431983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 00:36:40.970 [2024-12-16 02:58:11.432185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.432216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 
00:36:40.970 [2024-12-16 02:58:11.432413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.432445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 00:36:40.970 [2024-12-16 02:58:11.432584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.432615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 00:36:40.970 [2024-12-16 02:58:11.432899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.432938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 00:36:40.970 [2024-12-16 02:58:11.433202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.433235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 00:36:40.970 [2024-12-16 02:58:11.433425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.433456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 
00:36:40.970 [2024-12-16 02:58:11.433569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.433600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 00:36:40.970 [2024-12-16 02:58:11.433845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.433885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 00:36:40.970 [2024-12-16 02:58:11.434086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.434117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 00:36:40.970 [2024-12-16 02:58:11.434334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.434365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 00:36:40.970 [2024-12-16 02:58:11.434552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.434585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 
00:36:40.970 [2024-12-16 02:58:11.434695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.434726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 00:36:40.970 [2024-12-16 02:58:11.434885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.434918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 00:36:40.970 [2024-12-16 02:58:11.435188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.435220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 00:36:40.970 [2024-12-16 02:58:11.435424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.435455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 00:36:40.970 [2024-12-16 02:58:11.435646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.435677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 
00:36:40.970 [2024-12-16 02:58:11.435872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.970 [2024-12-16 02:58:11.435905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.970 qpair failed and we were unable to recover it. 00:36:40.970 [2024-12-16 02:58:11.436114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.971 [2024-12-16 02:58:11.436146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.971 qpair failed and we were unable to recover it. 00:36:40.971 [2024-12-16 02:58:11.436272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.971 [2024-12-16 02:58:11.436303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.971 qpair failed and we were unable to recover it. 00:36:40.971 [2024-12-16 02:58:11.436508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.971 [2024-12-16 02:58:11.436540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.971 qpair failed and we were unable to recover it. 00:36:40.971 [2024-12-16 02:58:11.436781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.971 [2024-12-16 02:58:11.436812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.971 qpair failed and we were unable to recover it. 
00:36:40.971 [2024-12-16 02:58:11.436997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.971 [2024-12-16 02:58:11.437028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.971 qpair failed and we were unable to recover it. 00:36:40.971 [2024-12-16 02:58:11.437221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.971 [2024-12-16 02:58:11.437252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.971 qpair failed and we were unable to recover it. 00:36:40.971 [2024-12-16 02:58:11.437531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.971 [2024-12-16 02:58:11.437563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.971 qpair failed and we were unable to recover it. 00:36:40.971 [2024-12-16 02:58:11.437745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.971 [2024-12-16 02:58:11.437776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.971 qpair failed and we were unable to recover it. 00:36:40.971 [2024-12-16 02:58:11.437962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.971 [2024-12-16 02:58:11.437994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.971 qpair failed and we were unable to recover it. 
00:36:40.971 [2024-12-16 02:58:11.438270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.971 [2024-12-16 02:58:11.438302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.971 qpair failed and we were unable to recover it.
00:36:40.971 [2024-12-16 02:58:11.438491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.971 [2024-12-16 02:58:11.438523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.971 qpair failed and we were unable to recover it.
00:36:40.971 [2024-12-16 02:58:11.438773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.971 [2024-12-16 02:58:11.438805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.971 qpair failed and we were unable to recover it.
00:36:40.971 [2024-12-16 02:58:11.439097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.971 [2024-12-16 02:58:11.439130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.971 qpair failed and we were unable to recover it.
00:36:40.971 [2024-12-16 02:58:11.439349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.971 [2024-12-16 02:58:11.439381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.971 qpair failed and we were unable to recover it.
00:36:40.971 [2024-12-16 02:58:11.439671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.971 [2024-12-16 02:58:11.439703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.971 qpair failed and we were unable to recover it.
00:36:40.971 [2024-12-16 02:58:11.439889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.971 [2024-12-16 02:58:11.439922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.971 qpair failed and we were unable to recover it.
00:36:40.971 [2024-12-16 02:58:11.440105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.971 [2024-12-16 02:58:11.440136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.971 qpair failed and we were unable to recover it.
00:36:40.971 [2024-12-16 02:58:11.440384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.971 [2024-12-16 02:58:11.440415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.971 qpair failed and we were unable to recover it.
00:36:40.971 [2024-12-16 02:58:11.440518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.971 [2024-12-16 02:58:11.440548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.971 qpair failed and we were unable to recover it.
00:36:40.971 [2024-12-16 02:58:11.440724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.971 [2024-12-16 02:58:11.440753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.971 qpair failed and we were unable to recover it.
00:36:40.971 [2024-12-16 02:58:11.440941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.971 [2024-12-16 02:58:11.440972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.971 qpair failed and we were unable to recover it.
00:36:40.971 [2024-12-16 02:58:11.441158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.971 [2024-12-16 02:58:11.441189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.971 qpair failed and we were unable to recover it.
00:36:40.971 [2024-12-16 02:58:11.441322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.971 [2024-12-16 02:58:11.441353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.971 qpair failed and we were unable to recover it.
00:36:40.971 [2024-12-16 02:58:11.441542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.971 [2024-12-16 02:58:11.441575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.971 qpair failed and we were unable to recover it.
00:36:40.971 [2024-12-16 02:58:11.441753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.971 [2024-12-16 02:58:11.441785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.971 qpair failed and we were unable to recover it.
00:36:40.971 [2024-12-16 02:58:11.441988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.971 [2024-12-16 02:58:11.442020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.971 qpair failed and we were unable to recover it.
00:36:40.971 [2024-12-16 02:58:11.442203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.971 [2024-12-16 02:58:11.442242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.971 qpair failed and we were unable to recover it.
00:36:40.971 [2024-12-16 02:58:11.442419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.971 [2024-12-16 02:58:11.442452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.971 qpair failed and we were unable to recover it.
00:36:40.971 [2024-12-16 02:58:11.442639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.971 [2024-12-16 02:58:11.442670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.971 qpair failed and we were unable to recover it.
00:36:40.971 [2024-12-16 02:58:11.442931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.971 [2024-12-16 02:58:11.442964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.971 qpair failed and we were unable to recover it.
00:36:40.971 [2024-12-16 02:58:11.443081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.971 [2024-12-16 02:58:11.443115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.971 qpair failed and we were unable to recover it.
00:36:40.971 [2024-12-16 02:58:11.443237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.971 [2024-12-16 02:58:11.443269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.971 qpair failed and we were unable to recover it.
00:36:40.971 [2024-12-16 02:58:11.443529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.971 [2024-12-16 02:58:11.443563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.971 qpair failed and we were unable to recover it.
00:36:40.971 [2024-12-16 02:58:11.443799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.971 [2024-12-16 02:58:11.443830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.971 qpair failed and we were unable to recover it.
00:36:40.971 [2024-12-16 02:58:11.443964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.971 [2024-12-16 02:58:11.443995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.971 qpair failed and we were unable to recover it.
00:36:40.971 [2024-12-16 02:58:11.444179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.971 [2024-12-16 02:58:11.444211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.971 qpair failed and we were unable to recover it.
00:36:40.971 [2024-12-16 02:58:11.444401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.971 [2024-12-16 02:58:11.444432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.971 qpair failed and we were unable to recover it.
00:36:40.971 [2024-12-16 02:58:11.444609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.971 [2024-12-16 02:58:11.444640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.971 qpair failed and we were unable to recover it.
00:36:40.971 [2024-12-16 02:58:11.444762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.971 [2024-12-16 02:58:11.444794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.971 qpair failed and we were unable to recover it.
00:36:40.971 [2024-12-16 02:58:11.444924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.444957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.445230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.445262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.445434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.445466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.445581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.445612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.445788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.445818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.446046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.446081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.446258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.446289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.446463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.446495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.446687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.446719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.446954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.446986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.447165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.447196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.447378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.447411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.447531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.447563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.447665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.447696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.447889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.447924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.448137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.448168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.448353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.448385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.448520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.448553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.448659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.448694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.448977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.449010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.449203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.449236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.449364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.449395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.449526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.449558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.449829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.449868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.450050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.450081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.450343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.450374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.450499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.450532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.450768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.450805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.451012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.451045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.451232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.451263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.451455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.451486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.451701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.451733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.451902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.451935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.452159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.452191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.452458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.452490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.452690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.452721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.452927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.452959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.972 [2024-12-16 02:58:11.453147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.972 [2024-12-16 02:58:11.453179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.972 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.453442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.453474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.453663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.453694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.453895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.453929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.454120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.454152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.454265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.454297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.454502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.454535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.454775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.454806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.454965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.454998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.455194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.455226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.455348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.455380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.455492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.455524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.455800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.455833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.455983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.456016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.456217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.456248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.456454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.456486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.456654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.456685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.456874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.456908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.457119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.457151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.457324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.457355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.457556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.457587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.457800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.457832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.458010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.458042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.458250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.458281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.458422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.458455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.458645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.458677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.458883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.458916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.459033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.459066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.459239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.459270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.459539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.459571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.459754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.459791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.459968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.460000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.460110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.460143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.460324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.460363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.973 [2024-12-16 02:58:11.460479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.973 [2024-12-16 02:58:11.460510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.973 qpair failed and we were unable to recover it.
00:36:40.974 [2024-12-16 02:58:11.460639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.974 [2024-12-16 02:58:11.460673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.974 qpair failed and we were unable to recover it.
00:36:40.974 [2024-12-16 02:58:11.460865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.974 [2024-12-16 02:58:11.460898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.974 qpair failed and we were unable to recover it.
00:36:40.974 [2024-12-16 02:58:11.461081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.974 [2024-12-16 02:58:11.461113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.974 qpair failed and we were unable to recover it.
00:36:40.974 [2024-12-16 02:58:11.461324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.974 [2024-12-16 02:58:11.461357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.974 qpair failed and we were unable to recover it.
00:36:40.974 [2024-12-16 02:58:11.461566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.974 [2024-12-16 02:58:11.461597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.974 qpair failed and we were unable to recover it.
00:36:40.974 [2024-12-16 02:58:11.461765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.974 [2024-12-16 02:58:11.461796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.974 qpair failed and we were unable to recover it.
00:36:40.974 [2024-12-16 02:58:11.462071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.974 [2024-12-16 02:58:11.462108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.974 qpair failed and we were unable to recover it.
00:36:40.974 [2024-12-16 02:58:11.462284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.974 [2024-12-16 02:58:11.462316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.974 qpair failed and we were unable to recover it.
00:36:40.974 [2024-12-16 02:58:11.462494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.974 [2024-12-16 02:58:11.462527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.974 qpair failed and we were unable to recover it.
00:36:40.974 [2024-12-16 02:58:11.462655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.974 [2024-12-16 02:58:11.462687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.974 qpair failed and we were unable to recover it.
00:36:40.974 [2024-12-16 02:58:11.462863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.974 [2024-12-16 02:58:11.462896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.974 qpair failed and we were unable to recover it.
00:36:40.974 [2024-12-16 02:58:11.463078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.974 [2024-12-16 02:58:11.463109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.974 qpair failed and we were unable to recover it.
00:36:40.974 [2024-12-16 02:58:11.463235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.974 [2024-12-16 02:58:11.463274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:40.974 qpair failed and we were unable to recover it.
00:36:40.974 [2024-12-16 02:58:11.463466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.974 [2024-12-16 02:58:11.463497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.974 qpair failed and we were unable to recover it. 00:36:40.974 [2024-12-16 02:58:11.463600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.974 [2024-12-16 02:58:11.463630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.974 qpair failed and we were unable to recover it. 00:36:40.974 [2024-12-16 02:58:11.463760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.974 [2024-12-16 02:58:11.463794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.974 qpair failed and we were unable to recover it. 00:36:40.974 [2024-12-16 02:58:11.464068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.974 [2024-12-16 02:58:11.464101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.974 qpair failed and we were unable to recover it. 00:36:40.974 [2024-12-16 02:58:11.464301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.974 [2024-12-16 02:58:11.464333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.974 qpair failed and we were unable to recover it. 
00:36:40.974 [2024-12-16 02:58:11.464461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.974 [2024-12-16 02:58:11.464492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.974 qpair failed and we were unable to recover it. 00:36:40.974 [2024-12-16 02:58:11.464758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.974 [2024-12-16 02:58:11.464789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.974 qpair failed and we were unable to recover it. 00:36:40.974 [2024-12-16 02:58:11.465001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.974 [2024-12-16 02:58:11.465034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.974 qpair failed and we were unable to recover it. 00:36:40.974 [2024-12-16 02:58:11.465219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.974 [2024-12-16 02:58:11.465250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.974 qpair failed and we were unable to recover it. 00:36:40.974 [2024-12-16 02:58:11.465428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.974 [2024-12-16 02:58:11.465460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.974 qpair failed and we were unable to recover it. 
00:36:40.974 [2024-12-16 02:58:11.465686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.974 [2024-12-16 02:58:11.465718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.974 qpair failed and we were unable to recover it. 00:36:40.974 [2024-12-16 02:58:11.465982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.974 [2024-12-16 02:58:11.466015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.974 qpair failed and we were unable to recover it. 00:36:40.974 [2024-12-16 02:58:11.466217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.974 [2024-12-16 02:58:11.466249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.974 qpair failed and we were unable to recover it. 00:36:40.974 [2024-12-16 02:58:11.466438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.974 [2024-12-16 02:58:11.466470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.974 qpair failed and we were unable to recover it. 00:36:40.974 [2024-12-16 02:58:11.466657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.974 [2024-12-16 02:58:11.466688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.974 qpair failed and we were unable to recover it. 
00:36:40.974 [2024-12-16 02:58:11.466817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.974 [2024-12-16 02:58:11.466866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.974 qpair failed and we were unable to recover it. 00:36:40.974 [2024-12-16 02:58:11.467064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.974 [2024-12-16 02:58:11.467097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.974 qpair failed and we were unable to recover it. 00:36:40.974 [2024-12-16 02:58:11.467280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.974 [2024-12-16 02:58:11.467311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.974 qpair failed and we were unable to recover it. 00:36:40.974 [2024-12-16 02:58:11.467485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.974 [2024-12-16 02:58:11.467517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.974 qpair failed and we were unable to recover it. 00:36:40.974 [2024-12-16 02:58:11.467731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.974 [2024-12-16 02:58:11.467763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.974 qpair failed and we were unable to recover it. 
00:36:40.974 [2024-12-16 02:58:11.467935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.974 [2024-12-16 02:58:11.467967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.974 qpair failed and we were unable to recover it. 00:36:40.974 [2024-12-16 02:58:11.468141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.974 [2024-12-16 02:58:11.468172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.974 qpair failed and we were unable to recover it. 00:36:40.974 [2024-12-16 02:58:11.468389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.974 [2024-12-16 02:58:11.468428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.974 qpair failed and we were unable to recover it. 00:36:40.974 [2024-12-16 02:58:11.468619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.974 [2024-12-16 02:58:11.468651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.974 qpair failed and we were unable to recover it. 00:36:40.974 [2024-12-16 02:58:11.468823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.974 [2024-12-16 02:58:11.468873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.974 qpair failed and we were unable to recover it. 
00:36:40.974 [2024-12-16 02:58:11.469014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.974 [2024-12-16 02:58:11.469047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.469225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.469256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.469440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.469471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.469711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.469744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.469918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.469951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 
00:36:40.975 [2024-12-16 02:58:11.470067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.470099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.470222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.470252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.470378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.470411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.470590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.470621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.470800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.470831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 
00:36:40.975 [2024-12-16 02:58:11.470983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.471017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.471293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.471326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.471494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.471526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.471720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.471752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.471952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.471985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 
00:36:40.975 [2024-12-16 02:58:11.472172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.472204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.472380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.472412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.472590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.472621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.472894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.472926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.473043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.473075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 
00:36:40.975 [2024-12-16 02:58:11.473365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.473397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.473586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.473617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.473890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.473923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.474100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.474133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.474262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.474294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 
00:36:40.975 [2024-12-16 02:58:11.474432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.474464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.474650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.474683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.474946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.474979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.475165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.475196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.475390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.475422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 
00:36:40.975 [2024-12-16 02:58:11.475538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.475570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.475673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.475704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.475976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.476009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.476295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.476326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.476460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.476491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 
00:36:40.975 [2024-12-16 02:58:11.476681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.476714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.476978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.477011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.477212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.477250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.477434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.975 [2024-12-16 02:58:11.477466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.975 qpair failed and we were unable to recover it. 00:36:40.975 [2024-12-16 02:58:11.477691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.976 [2024-12-16 02:58:11.477722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.976 qpair failed and we were unable to recover it. 
00:36:40.976 [2024-12-16 02:58:11.477974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.976 [2024-12-16 02:58:11.478007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.976 qpair failed and we were unable to recover it. 00:36:40.976 [2024-12-16 02:58:11.478276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.976 [2024-12-16 02:58:11.478308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.976 qpair failed and we were unable to recover it. 00:36:40.976 [2024-12-16 02:58:11.478484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.976 [2024-12-16 02:58:11.478515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.976 qpair failed and we were unable to recover it. 00:36:40.976 [2024-12-16 02:58:11.478705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.976 [2024-12-16 02:58:11.478737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.976 qpair failed and we were unable to recover it. 00:36:40.976 [2024-12-16 02:58:11.479018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.976 [2024-12-16 02:58:11.479051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.976 qpair failed and we were unable to recover it. 
00:36:40.976 [2024-12-16 02:58:11.479156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.976 [2024-12-16 02:58:11.479188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.976 qpair failed and we were unable to recover it. 00:36:40.976 [2024-12-16 02:58:11.479407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.976 [2024-12-16 02:58:11.479439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.976 qpair failed and we were unable to recover it. 00:36:40.976 [2024-12-16 02:58:11.479545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.976 [2024-12-16 02:58:11.479578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.976 qpair failed and we were unable to recover it. 00:36:40.976 [2024-12-16 02:58:11.479765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.976 [2024-12-16 02:58:11.479797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.976 qpair failed and we were unable to recover it. 00:36:40.976 [2024-12-16 02:58:11.480055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.976 [2024-12-16 02:58:11.480088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.976 qpair failed and we were unable to recover it. 
00:36:40.976 [2024-12-16 02:58:11.480346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.976 [2024-12-16 02:58:11.480377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.976 qpair failed and we were unable to recover it. 
[… the same three-message sequence (posix_sock_create connect() failed errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every subsequent reconnect attempt, with timestamps advancing from 02:58:11.480495 through 02:58:11.505587; every attempt fails identically with errno 111 …]
00:36:40.979 [2024-12-16 02:58:11.505797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.979 [2024-12-16 02:58:11.505828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.979 qpair failed and we were unable to recover it. 00:36:40.979 [2024-12-16 02:58:11.506049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.979 [2024-12-16 02:58:11.506082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.979 qpair failed and we were unable to recover it. 00:36:40.979 [2024-12-16 02:58:11.506276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.979 [2024-12-16 02:58:11.506308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.979 qpair failed and we were unable to recover it. 00:36:40.979 [2024-12-16 02:58:11.506543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.979 [2024-12-16 02:58:11.506574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.979 qpair failed and we were unable to recover it. 00:36:40.979 [2024-12-16 02:58:11.506712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.979 [2024-12-16 02:58:11.506744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.979 qpair failed and we were unable to recover it. 
00:36:40.979 [2024-12-16 02:58:11.506883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.979 [2024-12-16 02:58:11.506916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.979 qpair failed and we were unable to recover it. 00:36:40.979 [2024-12-16 02:58:11.507052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.979 [2024-12-16 02:58:11.507084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.979 qpair failed and we were unable to recover it. 00:36:40.979 [2024-12-16 02:58:11.507260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.979 [2024-12-16 02:58:11.507292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.979 qpair failed and we were unable to recover it. 00:36:40.979 [2024-12-16 02:58:11.507503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.979 [2024-12-16 02:58:11.507535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.979 qpair failed and we were unable to recover it. 00:36:40.979 [2024-12-16 02:58:11.507730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.979 [2024-12-16 02:58:11.507761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.979 qpair failed and we were unable to recover it. 
00:36:40.979 [2024-12-16 02:58:11.507953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.979 [2024-12-16 02:58:11.507986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.979 qpair failed and we were unable to recover it. 00:36:40.979 [2024-12-16 02:58:11.508177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.979 [2024-12-16 02:58:11.508210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.979 qpair failed and we were unable to recover it. 00:36:40.979 [2024-12-16 02:58:11.508335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.979 [2024-12-16 02:58:11.508366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.979 qpair failed and we were unable to recover it. 00:36:40.979 [2024-12-16 02:58:11.508485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.979 [2024-12-16 02:58:11.508517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.979 qpair failed and we were unable to recover it. 00:36:40.979 [2024-12-16 02:58:11.508758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.979 [2024-12-16 02:58:11.508791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.979 qpair failed and we were unable to recover it. 
00:36:40.979 [2024-12-16 02:58:11.509007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.979 [2024-12-16 02:58:11.509039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.979 qpair failed and we were unable to recover it. 00:36:40.979 [2024-12-16 02:58:11.509250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.979 [2024-12-16 02:58:11.509282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.979 qpair failed and we were unable to recover it. 00:36:40.979 [2024-12-16 02:58:11.509467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.979 [2024-12-16 02:58:11.509500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.979 qpair failed and we were unable to recover it. 00:36:40.979 [2024-12-16 02:58:11.509608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.979 [2024-12-16 02:58:11.509639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.979 qpair failed and we were unable to recover it. 00:36:40.979 [2024-12-16 02:58:11.509868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.979 [2024-12-16 02:58:11.509933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.979 qpair failed and we were unable to recover it. 
00:36:40.979 [2024-12-16 02:58:11.510170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.979 [2024-12-16 02:58:11.510211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.979 qpair failed and we were unable to recover it. 00:36:40.979 [2024-12-16 02:58:11.510425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.979 [2024-12-16 02:58:11.510458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.979 qpair failed and we were unable to recover it. 00:36:40.979 [2024-12-16 02:58:11.510572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.979 [2024-12-16 02:58:11.510608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.979 qpair failed and we were unable to recover it. 00:36:40.979 [2024-12-16 02:58:11.510749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.979 [2024-12-16 02:58:11.510782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.979 qpair failed and we were unable to recover it. 00:36:40.979 [2024-12-16 02:58:11.511045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.979 [2024-12-16 02:58:11.511090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.979 qpair failed and we were unable to recover it. 
00:36:40.979 [2024-12-16 02:58:11.511278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.979 [2024-12-16 02:58:11.511310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.511556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.511592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.511709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.511740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.511885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.511918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.512116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.512153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 
00:36:40.980 [2024-12-16 02:58:11.512346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.512377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.512620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.512655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.512920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.512972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.513158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.513192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.513403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.513435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 
00:36:40.980 [2024-12-16 02:58:11.513695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.513727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.513968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.514014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.514262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.514294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.514509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.514546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.514783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.514820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 
00:36:40.980 [2024-12-16 02:58:11.515123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.515158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.515300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.515333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.515518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.515549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.515741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.515843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.516109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.516199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 
00:36:40.980 [2024-12-16 02:58:11.516331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.516364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.516490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.516522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.516642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.516674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.516867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.516900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.517086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.517117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 
00:36:40.980 [2024-12-16 02:58:11.517314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.517346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.517523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.517555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.517677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.517708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.517929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.517963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.518104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.518136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 
00:36:40.980 [2024-12-16 02:58:11.518317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.518348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.518527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.518559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.518673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.518704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.518900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.518933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.519202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.519236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 
00:36:40.980 [2024-12-16 02:58:11.519406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.519438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.519694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.519726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.519944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.519977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.520152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.520183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 00:36:40.980 [2024-12-16 02:58:11.520445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.980 [2024-12-16 02:58:11.520476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.980 qpair failed and we were unable to recover it. 
00:36:40.981 [2024-12-16 02:58:11.520723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.520755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 00:36:40.981 [2024-12-16 02:58:11.521038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.521070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 00:36:40.981 [2024-12-16 02:58:11.521251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.521283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 00:36:40.981 [2024-12-16 02:58:11.521481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.521513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 00:36:40.981 [2024-12-16 02:58:11.521718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.521749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 
00:36:40.981 [2024-12-16 02:58:11.521992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.522024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 00:36:40.981 [2024-12-16 02:58:11.522210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.522243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 00:36:40.981 [2024-12-16 02:58:11.522410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.522447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 00:36:40.981 [2024-12-16 02:58:11.522685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.522717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 00:36:40.981 [2024-12-16 02:58:11.522862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.522896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 
00:36:40.981 [2024-12-16 02:58:11.523031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.523062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 00:36:40.981 [2024-12-16 02:58:11.523244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.523275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 00:36:40.981 [2024-12-16 02:58:11.523383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.523417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 00:36:40.981 [2024-12-16 02:58:11.523595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.523627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 00:36:40.981 [2024-12-16 02:58:11.523805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.523837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 
00:36:40.981 [2024-12-16 02:58:11.524015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.524047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 00:36:40.981 [2024-12-16 02:58:11.524222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.524253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 00:36:40.981 [2024-12-16 02:58:11.524491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.524524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 00:36:40.981 [2024-12-16 02:58:11.524762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.524793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 00:36:40.981 [2024-12-16 02:58:11.524990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.525022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 
00:36:40.981 [2024-12-16 02:58:11.525198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.525230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 00:36:40.981 [2024-12-16 02:58:11.525423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.525455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 00:36:40.981 [2024-12-16 02:58:11.525627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.525659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 00:36:40.981 [2024-12-16 02:58:11.525928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.525961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 00:36:40.981 [2024-12-16 02:58:11.526206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.526238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 
00:36:40.981 [2024-12-16 02:58:11.526340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.526370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 00:36:40.981 [2024-12-16 02:58:11.526563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.526595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 00:36:40.981 [2024-12-16 02:58:11.526730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.526762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 00:36:40.981 [2024-12-16 02:58:11.527001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.527033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 00:36:40.981 [2024-12-16 02:58:11.527224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.527256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 
00:36:40.981 [2024-12-16 02:58:11.527519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.527551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 00:36:40.981 [2024-12-16 02:58:11.527724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.527755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 00:36:40.981 [2024-12-16 02:58:11.527890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.527925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 00:36:40.981 [2024-12-16 02:58:11.528109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.528140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 00:36:40.981 [2024-12-16 02:58:11.528366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.981 [2024-12-16 02:58:11.528437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.981 qpair failed and we were unable to recover it. 
00:36:40.981 [2024-12-16 02:58:11.528579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.528616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.982 [2024-12-16 02:58:11.528747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.528780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.982 [2024-12-16 02:58:11.529041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.529075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.982 [2024-12-16 02:58:11.529287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.529320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.982 [2024-12-16 02:58:11.529524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.529557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 
00:36:40.982 [2024-12-16 02:58:11.529739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.529771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.982 [2024-12-16 02:58:11.529903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.529937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.982 [2024-12-16 02:58:11.530080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.530111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.982 [2024-12-16 02:58:11.530319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.530351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.982 [2024-12-16 02:58:11.530633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.530666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 
00:36:40.982 [2024-12-16 02:58:11.530863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.530897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.982 [2024-12-16 02:58:11.531081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.531112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.982 [2024-12-16 02:58:11.531322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.531353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.982 [2024-12-16 02:58:11.531502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.531534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.982 [2024-12-16 02:58:11.531710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.531742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 
00:36:40.982 [2024-12-16 02:58:11.531877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.531910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.982 [2024-12-16 02:58:11.532084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.532116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.982 [2024-12-16 02:58:11.532359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.532390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.982 [2024-12-16 02:58:11.532596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.532628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.982 [2024-12-16 02:58:11.532819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.532860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 
00:36:40.982 [2024-12-16 02:58:11.532982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.533014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.982 [2024-12-16 02:58:11.533134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.533165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.982 [2024-12-16 02:58:11.533382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.533413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.982 [2024-12-16 02:58:11.533620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.533652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.982 [2024-12-16 02:58:11.533756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.533786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 
00:36:40.982 [2024-12-16 02:58:11.533920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.533953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.982 [2024-12-16 02:58:11.534125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.534162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.982 [2024-12-16 02:58:11.534278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.534310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.982 [2024-12-16 02:58:11.534501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.534533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.982 [2024-12-16 02:58:11.534726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.534758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 
00:36:40.982 [2024-12-16 02:58:11.534947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.534980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.982 [2024-12-16 02:58:11.535120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.535152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.982 [2024-12-16 02:58:11.535272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.535304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.982 [2024-12-16 02:58:11.535557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.535588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.982 [2024-12-16 02:58:11.535833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.535874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 
00:36:40.982 [2024-12-16 02:58:11.536056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.536087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.982 [2024-12-16 02:58:11.536343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.536374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.982 [2024-12-16 02:58:11.536664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.536695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.982 [2024-12-16 02:58:11.536920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.982 [2024-12-16 02:58:11.536953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.982 qpair failed and we were unable to recover it. 00:36:40.983 [2024-12-16 02:58:11.537190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.537223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 
00:36:40.983 [2024-12-16 02:58:11.537408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.537441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 00:36:40.983 [2024-12-16 02:58:11.537695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.537726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 00:36:40.983 [2024-12-16 02:58:11.537936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.537969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 00:36:40.983 [2024-12-16 02:58:11.538113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.538145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 00:36:40.983 [2024-12-16 02:58:11.538286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.538318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 
00:36:40.983 [2024-12-16 02:58:11.538433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.538464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 00:36:40.983 [2024-12-16 02:58:11.538613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.538645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 00:36:40.983 [2024-12-16 02:58:11.538771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.538802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 00:36:40.983 [2024-12-16 02:58:11.539055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.539088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 00:36:40.983 [2024-12-16 02:58:11.539284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.539316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 
00:36:40.983 [2024-12-16 02:58:11.539546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.539577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 00:36:40.983 [2024-12-16 02:58:11.539692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.539724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 00:36:40.983 [2024-12-16 02:58:11.539989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.540023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 00:36:40.983 [2024-12-16 02:58:11.540132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.540163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 00:36:40.983 [2024-12-16 02:58:11.540339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.540371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 
00:36:40.983 [2024-12-16 02:58:11.540565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.540597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 00:36:40.983 [2024-12-16 02:58:11.540808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.540839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 00:36:40.983 [2024-12-16 02:58:11.540982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.541014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 00:36:40.983 [2024-12-16 02:58:11.541205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.541236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 00:36:40.983 [2024-12-16 02:58:11.541360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.541391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 
00:36:40.983 [2024-12-16 02:58:11.541574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.541606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 00:36:40.983 [2024-12-16 02:58:11.541785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.541817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 00:36:40.983 [2024-12-16 02:58:11.542011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.542047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 00:36:40.983 [2024-12-16 02:58:11.542248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.542278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 00:36:40.983 [2024-12-16 02:58:11.542540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.542572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 
00:36:40.983 [2024-12-16 02:58:11.542743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.542774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 00:36:40.983 [2024-12-16 02:58:11.542951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.542985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 00:36:40.983 [2024-12-16 02:58:11.543271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.543309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 00:36:40.983 [2024-12-16 02:58:11.543571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.543603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 00:36:40.983 [2024-12-16 02:58:11.543868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.543900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 
00:36:40.983 [2024-12-16 02:58:11.544114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.544146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 00:36:40.983 [2024-12-16 02:58:11.544339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.544371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 00:36:40.983 [2024-12-16 02:58:11.544607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.544638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 00:36:40.983 [2024-12-16 02:58:11.544896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.544930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 00:36:40.983 [2024-12-16 02:58:11.545122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.983 [2024-12-16 02:58:11.545153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.983 qpair failed and we were unable to recover it. 
00:36:40.986 [2024-12-16 02:58:11.564018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:40.986 [2024-12-16 02:58:11.564077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:40.986 qpair failed and we were unable to recover it.
00:36:40.986 [2024-12-16 02:58:11.570390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.986 [2024-12-16 02:58:11.570421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.986 qpair failed and we were unable to recover it. 00:36:40.986 [2024-12-16 02:58:11.570688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.986 [2024-12-16 02:58:11.570720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.986 qpair failed and we were unable to recover it. 00:36:40.986 [2024-12-16 02:58:11.570920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.986 [2024-12-16 02:58:11.570953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.986 qpair failed and we were unable to recover it. 00:36:40.986 [2024-12-16 02:58:11.571125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.986 [2024-12-16 02:58:11.571156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.986 qpair failed and we were unable to recover it. 00:36:40.986 [2024-12-16 02:58:11.571270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.986 [2024-12-16 02:58:11.571308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.986 qpair failed and we were unable to recover it. 
00:36:40.986 [2024-12-16 02:58:11.571549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.986 [2024-12-16 02:58:11.571581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.986 qpair failed and we were unable to recover it. 00:36:40.986 [2024-12-16 02:58:11.571761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.986 [2024-12-16 02:58:11.571791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.986 qpair failed and we were unable to recover it. 00:36:40.986 [2024-12-16 02:58:11.571936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.986 [2024-12-16 02:58:11.571969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.986 qpair failed and we were unable to recover it. 00:36:40.987 [2024-12-16 02:58:11.572109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.572139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 00:36:40.987 [2024-12-16 02:58:11.572381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.572413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 
00:36:40.987 [2024-12-16 02:58:11.572735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.572768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 00:36:40.987 [2024-12-16 02:58:11.572957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.572990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 00:36:40.987 [2024-12-16 02:58:11.573171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.573202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 00:36:40.987 [2024-12-16 02:58:11.573397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.573430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 00:36:40.987 [2024-12-16 02:58:11.573670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.573701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 
00:36:40.987 [2024-12-16 02:58:11.573962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.573995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 00:36:40.987 [2024-12-16 02:58:11.574271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.574303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 00:36:40.987 [2024-12-16 02:58:11.574559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.574591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 00:36:40.987 [2024-12-16 02:58:11.574786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.574831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 00:36:40.987 [2024-12-16 02:58:11.575044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.575080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 
00:36:40.987 [2024-12-16 02:58:11.575273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.575306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 00:36:40.987 [2024-12-16 02:58:11.575495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.575538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 00:36:40.987 [2024-12-16 02:58:11.575670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.575710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 00:36:40.987 [2024-12-16 02:58:11.575830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.575884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 00:36:40.987 [2024-12-16 02:58:11.576112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.576146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 
00:36:40.987 [2024-12-16 02:58:11.576366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.576398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 00:36:40.987 [2024-12-16 02:58:11.576634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.576665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 00:36:40.987 [2024-12-16 02:58:11.576783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.576815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 00:36:40.987 [2024-12-16 02:58:11.577094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.577126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 00:36:40.987 [2024-12-16 02:58:11.577390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.577422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 
00:36:40.987 [2024-12-16 02:58:11.577631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.577662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 00:36:40.987 [2024-12-16 02:58:11.577866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.577899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 00:36:40.987 [2024-12-16 02:58:11.578168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.578199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 00:36:40.987 [2024-12-16 02:58:11.578377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.578409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 00:36:40.987 [2024-12-16 02:58:11.578619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.578652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 
00:36:40.987 [2024-12-16 02:58:11.578867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.578900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 00:36:40.987 [2024-12-16 02:58:11.579154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.579186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 00:36:40.987 [2024-12-16 02:58:11.579399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.579431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 00:36:40.987 [2024-12-16 02:58:11.579620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.579651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 00:36:40.987 [2024-12-16 02:58:11.579777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.579809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 
00:36:40.987 [2024-12-16 02:58:11.579954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.579986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 00:36:40.987 [2024-12-16 02:58:11.580178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.580210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 00:36:40.987 [2024-12-16 02:58:11.580416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.580448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 00:36:40.987 [2024-12-16 02:58:11.580690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.580722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 00:36:40.987 [2024-12-16 02:58:11.580854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.580887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 
00:36:40.987 [2024-12-16 02:58:11.581080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.987 [2024-12-16 02:58:11.581117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.987 qpair failed and we were unable to recover it. 00:36:40.987 [2024-12-16 02:58:11.581290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.988 [2024-12-16 02:58:11.581322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.988 qpair failed and we were unable to recover it. 00:36:40.988 [2024-12-16 02:58:11.581514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.988 [2024-12-16 02:58:11.581545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.988 qpair failed and we were unable to recover it. 00:36:40.988 [2024-12-16 02:58:11.581727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.988 [2024-12-16 02:58:11.581759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.988 qpair failed and we were unable to recover it. 00:36:40.988 [2024-12-16 02:58:11.581946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.988 [2024-12-16 02:58:11.581979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.988 qpair failed and we were unable to recover it. 
00:36:40.988 [2024-12-16 02:58:11.582179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.988 [2024-12-16 02:58:11.582211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.988 qpair failed and we were unable to recover it. 00:36:40.988 [2024-12-16 02:58:11.582424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.988 [2024-12-16 02:58:11.582456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.988 qpair failed and we were unable to recover it. 00:36:40.988 [2024-12-16 02:58:11.582646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.988 [2024-12-16 02:58:11.582677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.988 qpair failed and we were unable to recover it. 00:36:40.988 [2024-12-16 02:58:11.582871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.988 [2024-12-16 02:58:11.582904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.988 qpair failed and we were unable to recover it. 00:36:40.988 [2024-12-16 02:58:11.583028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.988 [2024-12-16 02:58:11.583060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.988 qpair failed and we were unable to recover it. 
00:36:40.988 [2024-12-16 02:58:11.583202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.988 [2024-12-16 02:58:11.583233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.988 qpair failed and we were unable to recover it. 00:36:40.988 [2024-12-16 02:58:11.583419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.988 [2024-12-16 02:58:11.583451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.988 qpair failed and we were unable to recover it. 00:36:40.988 [2024-12-16 02:58:11.583633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.988 [2024-12-16 02:58:11.583665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.988 qpair failed and we were unable to recover it. 00:36:40.988 [2024-12-16 02:58:11.583797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.988 [2024-12-16 02:58:11.583829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.988 qpair failed and we were unable to recover it. 00:36:40.988 [2024-12-16 02:58:11.583965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.988 [2024-12-16 02:58:11.583998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.988 qpair failed and we were unable to recover it. 
00:36:40.988 [2024-12-16 02:58:11.584193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.988 [2024-12-16 02:58:11.584225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.988 qpair failed and we were unable to recover it. 00:36:40.988 [2024-12-16 02:58:11.584428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.988 [2024-12-16 02:58:11.584460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.988 qpair failed and we were unable to recover it. 00:36:40.988 [2024-12-16 02:58:11.584585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.988 [2024-12-16 02:58:11.584616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.988 qpair failed and we were unable to recover it. 00:36:40.988 [2024-12-16 02:58:11.584739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.988 [2024-12-16 02:58:11.584770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.988 qpair failed and we were unable to recover it. 00:36:40.988 [2024-12-16 02:58:11.584901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.988 [2024-12-16 02:58:11.584934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.988 qpair failed and we were unable to recover it. 
00:36:40.988 [2024-12-16 02:58:11.585066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.988 [2024-12-16 02:58:11.585098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.988 qpair failed and we were unable to recover it. 00:36:40.988 [2024-12-16 02:58:11.585216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.988 [2024-12-16 02:58:11.585247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.988 qpair failed and we were unable to recover it. 00:36:40.988 [2024-12-16 02:58:11.585494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.988 [2024-12-16 02:58:11.585526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.988 qpair failed and we were unable to recover it. 00:36:40.988 [2024-12-16 02:58:11.585739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.988 [2024-12-16 02:58:11.585771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.988 qpair failed and we were unable to recover it. 00:36:40.988 [2024-12-16 02:58:11.585873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.988 [2024-12-16 02:58:11.585906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.988 qpair failed and we were unable to recover it. 
00:36:40.988 [2024-12-16 02:58:11.586089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.988 [2024-12-16 02:58:11.586120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.988 qpair failed and we were unable to recover it. 00:36:40.988 [2024-12-16 02:58:11.586337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.988 [2024-12-16 02:58:11.586370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.988 qpair failed and we were unable to recover it. 00:36:40.988 [2024-12-16 02:58:11.586551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.988 [2024-12-16 02:58:11.586588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.988 qpair failed and we were unable to recover it. 00:36:40.988 [2024-12-16 02:58:11.586699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.988 [2024-12-16 02:58:11.586733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.988 qpair failed and we were unable to recover it. 00:36:40.988 [2024-12-16 02:58:11.586867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:40.988 [2024-12-16 02:58:11.586898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:40.988 qpair failed and we were unable to recover it. 
00:36:40.988 [2024-12-16 02:58:11.587103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.268 [2024-12-16 02:58:11.587136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.268 qpair failed and we were unable to recover it. 
00:36:41.271 (last message pair repeated 114 times between 02:58:11.587326 and 02:58:11.610939: connect() to addr=10.0.0.2, port=4420 failed with errno = 111 for tqpair=0xd9fcd0; each attempt ended with "qpair failed and we were unable to recover it.")
00:36:41.271 [2024-12-16 02:58:11.611042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.271 [2024-12-16 02:58:11.611073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.271 qpair failed and we were unable to recover it. 00:36:41.271 [2024-12-16 02:58:11.611313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.271 [2024-12-16 02:58:11.611345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.271 qpair failed and we were unable to recover it. 00:36:41.271 [2024-12-16 02:58:11.611535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.271 [2024-12-16 02:58:11.611566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.271 qpair failed and we were unable to recover it. 00:36:41.271 [2024-12-16 02:58:11.611763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.271 [2024-12-16 02:58:11.611794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.271 qpair failed and we were unable to recover it. 00:36:41.271 [2024-12-16 02:58:11.611991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.271 [2024-12-16 02:58:11.612024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.271 qpair failed and we were unable to recover it. 
00:36:41.271 [2024-12-16 02:58:11.612259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.271 [2024-12-16 02:58:11.612291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.271 qpair failed and we were unable to recover it. 00:36:41.271 [2024-12-16 02:58:11.612587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.271 [2024-12-16 02:58:11.612619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.271 qpair failed and we were unable to recover it. 00:36:41.271 [2024-12-16 02:58:11.612816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.271 [2024-12-16 02:58:11.612857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.271 qpair failed and we were unable to recover it. 00:36:41.271 [2024-12-16 02:58:11.613043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.271 [2024-12-16 02:58:11.613076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.271 qpair failed and we were unable to recover it. 00:36:41.271 [2024-12-16 02:58:11.613246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.271 [2024-12-16 02:58:11.613276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 
00:36:41.272 [2024-12-16 02:58:11.613514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.613547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.613743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.613775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.613963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.613996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.614128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.614160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.614300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.614332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 
00:36:41.272 [2024-12-16 02:58:11.614443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.614474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.614668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.614700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.614830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.614868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.615112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.615150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.615268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.615302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 
00:36:41.272 [2024-12-16 02:58:11.615408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.615438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.615623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.615657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.615784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.615817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.616002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.616035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.616219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.616250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 
00:36:41.272 [2024-12-16 02:58:11.616367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.616398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.616656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.616688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.616807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.616839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.617154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.617186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.617355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.617386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 
00:36:41.272 [2024-12-16 02:58:11.617520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.617552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.617792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.617824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.618102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.618135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.618343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.618376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.618635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.618667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 
00:36:41.272 [2024-12-16 02:58:11.618791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.618821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.618971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.619004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.619190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.619221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.619390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.619421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.619614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.619650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 
00:36:41.272 [2024-12-16 02:58:11.619923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.619956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.620217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.620248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.620464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.620495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.620693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.620724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.620875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.620908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 
00:36:41.272 [2024-12-16 02:58:11.621105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.621137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.621319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.621351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.621533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.621564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.621741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.272 [2024-12-16 02:58:11.621773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.272 qpair failed and we were unable to recover it. 00:36:41.272 [2024-12-16 02:58:11.622014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.273 [2024-12-16 02:58:11.622046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.273 qpair failed and we were unable to recover it. 
00:36:41.273 [2024-12-16 02:58:11.622236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.273 [2024-12-16 02:58:11.622267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.273 qpair failed and we were unable to recover it. 00:36:41.273 [2024-12-16 02:58:11.622507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.273 [2024-12-16 02:58:11.622540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.273 qpair failed and we were unable to recover it. 00:36:41.273 [2024-12-16 02:58:11.622739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.273 [2024-12-16 02:58:11.622771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.273 qpair failed and we were unable to recover it. 00:36:41.273 [2024-12-16 02:58:11.622979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.273 [2024-12-16 02:58:11.623013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.273 qpair failed and we were unable to recover it. 00:36:41.273 [2024-12-16 02:58:11.623255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.273 [2024-12-16 02:58:11.623287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.273 qpair failed and we were unable to recover it. 
00:36:41.273 [2024-12-16 02:58:11.623403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.273 [2024-12-16 02:58:11.623435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.273 qpair failed and we were unable to recover it. 00:36:41.273 [2024-12-16 02:58:11.623609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.273 [2024-12-16 02:58:11.623640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.273 qpair failed and we were unable to recover it. 00:36:41.273 [2024-12-16 02:58:11.623813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.273 [2024-12-16 02:58:11.623845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.273 qpair failed and we were unable to recover it. 00:36:41.273 [2024-12-16 02:58:11.624061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.273 [2024-12-16 02:58:11.624092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.273 qpair failed and we were unable to recover it. 00:36:41.273 [2024-12-16 02:58:11.624273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.273 [2024-12-16 02:58:11.624311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.273 qpair failed and we were unable to recover it. 
00:36:41.273 [2024-12-16 02:58:11.624594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.273 [2024-12-16 02:58:11.624627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.273 qpair failed and we were unable to recover it. 00:36:41.273 [2024-12-16 02:58:11.624824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.273 [2024-12-16 02:58:11.624865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.273 qpair failed and we were unable to recover it. 00:36:41.273 [2024-12-16 02:58:11.625054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.273 [2024-12-16 02:58:11.625086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.273 qpair failed and we were unable to recover it. 00:36:41.273 [2024-12-16 02:58:11.625300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.273 [2024-12-16 02:58:11.625333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.273 qpair failed and we were unable to recover it. 00:36:41.273 [2024-12-16 02:58:11.625586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.273 [2024-12-16 02:58:11.625617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.273 qpair failed and we were unable to recover it. 
00:36:41.273 [2024-12-16 02:58:11.625811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.273 [2024-12-16 02:58:11.625843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.273 qpair failed and we were unable to recover it. 00:36:41.273 [2024-12-16 02:58:11.626121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.273 [2024-12-16 02:58:11.626153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.273 qpair failed and we were unable to recover it. 00:36:41.273 [2024-12-16 02:58:11.626271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.273 [2024-12-16 02:58:11.626302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.273 qpair failed and we were unable to recover it. 00:36:41.273 [2024-12-16 02:58:11.626494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.273 [2024-12-16 02:58:11.626526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.273 qpair failed and we were unable to recover it. 00:36:41.273 [2024-12-16 02:58:11.626769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.273 [2024-12-16 02:58:11.626801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.273 qpair failed and we were unable to recover it. 
00:36:41.273 [2024-12-16 02:58:11.626930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.273 [2024-12-16 02:58:11.626963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.273 qpair failed and we were unable to recover it. 00:36:41.273 [2024-12-16 02:58:11.627089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.273 [2024-12-16 02:58:11.627121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.273 qpair failed and we were unable to recover it. 00:36:41.273 [2024-12-16 02:58:11.627366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.273 [2024-12-16 02:58:11.627399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.273 qpair failed and we were unable to recover it. 00:36:41.273 [2024-12-16 02:58:11.627664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.273 [2024-12-16 02:58:11.627696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.273 qpair failed and we were unable to recover it. 00:36:41.273 [2024-12-16 02:58:11.627875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.273 [2024-12-16 02:58:11.627908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.273 qpair failed and we were unable to recover it. 
00:36:41.273 [2024-12-16 02:58:11.628036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.273 [2024-12-16 02:58:11.628068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.273 qpair failed and we were unable to recover it. 
[the preceding pair of messages — connect() failed with errno = 111, followed by the sock connection error and "qpair failed and we were unable to recover it." for tqpair=0xd9fcd0 (addr=10.0.0.2, port=4420) — repeats continuously from 02:58:11.628 through 02:58:11.653; identical repeats omitted]
00:36:41.276 [2024-12-16 02:58:11.654043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.276 [2024-12-16 02:58:11.654078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.276 qpair failed and we were unable to recover it. 00:36:41.276 [2024-12-16 02:58:11.654253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.654285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 00:36:41.277 [2024-12-16 02:58:11.654552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.654585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 00:36:41.277 [2024-12-16 02:58:11.654686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.654715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 00:36:41.277 [2024-12-16 02:58:11.654884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.654917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 
00:36:41.277 [2024-12-16 02:58:11.655188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.655226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 00:36:41.277 [2024-12-16 02:58:11.655432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.655464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 00:36:41.277 [2024-12-16 02:58:11.655576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.655607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 00:36:41.277 [2024-12-16 02:58:11.655798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.655829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 00:36:41.277 [2024-12-16 02:58:11.656105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.656138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 
00:36:41.277 [2024-12-16 02:58:11.656368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.656400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 00:36:41.277 [2024-12-16 02:58:11.656525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.656557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 00:36:41.277 [2024-12-16 02:58:11.656816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.656856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 00:36:41.277 [2024-12-16 02:58:11.656977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.657009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 00:36:41.277 [2024-12-16 02:58:11.657200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.657232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 
00:36:41.277 [2024-12-16 02:58:11.657427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.657458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 00:36:41.277 [2024-12-16 02:58:11.657631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.657662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 00:36:41.277 [2024-12-16 02:58:11.657789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.657821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 00:36:41.277 [2024-12-16 02:58:11.658053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.658085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 00:36:41.277 [2024-12-16 02:58:11.658204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.658236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 
00:36:41.277 [2024-12-16 02:58:11.658373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.658404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 00:36:41.277 [2024-12-16 02:58:11.658572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.658603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 00:36:41.277 [2024-12-16 02:58:11.658869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.658902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 00:36:41.277 [2024-12-16 02:58:11.659084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.659116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 00:36:41.277 [2024-12-16 02:58:11.659238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.659270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 
00:36:41.277 [2024-12-16 02:58:11.659458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.659489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 00:36:41.277 [2024-12-16 02:58:11.659742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.659775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 00:36:41.277 [2024-12-16 02:58:11.659969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.660002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 00:36:41.277 [2024-12-16 02:58:11.660186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.660217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 00:36:41.277 [2024-12-16 02:58:11.660350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.660382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 
00:36:41.277 [2024-12-16 02:58:11.660644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.660675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 00:36:41.277 [2024-12-16 02:58:11.660841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.660881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 00:36:41.277 [2024-12-16 02:58:11.661130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.661162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 00:36:41.277 [2024-12-16 02:58:11.661343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.661374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 00:36:41.277 [2024-12-16 02:58:11.661566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.661597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 
00:36:41.277 [2024-12-16 02:58:11.661894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.661927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 00:36:41.277 [2024-12-16 02:58:11.662168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.662201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 00:36:41.277 [2024-12-16 02:58:11.662307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.662338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.277 qpair failed and we were unable to recover it. 00:36:41.277 [2024-12-16 02:58:11.662630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.277 [2024-12-16 02:58:11.662662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 00:36:41.278 [2024-12-16 02:58:11.662899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.662932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 
00:36:41.278 [2024-12-16 02:58:11.663109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.663141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 00:36:41.278 [2024-12-16 02:58:11.663409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.663441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 00:36:41.278 [2024-12-16 02:58:11.663683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.663715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 00:36:41.278 [2024-12-16 02:58:11.663918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.663952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 00:36:41.278 [2024-12-16 02:58:11.664199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.664231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 
00:36:41.278 [2024-12-16 02:58:11.664402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.664434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 00:36:41.278 [2024-12-16 02:58:11.664685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.664722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 00:36:41.278 [2024-12-16 02:58:11.664909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.664943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 00:36:41.278 [2024-12-16 02:58:11.665078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.665109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 00:36:41.278 [2024-12-16 02:58:11.665355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.665386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 
00:36:41.278 [2024-12-16 02:58:11.665582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.665614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 00:36:41.278 [2024-12-16 02:58:11.665789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.665820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 00:36:41.278 [2024-12-16 02:58:11.665943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.665975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 00:36:41.278 [2024-12-16 02:58:11.666105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.666137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 00:36:41.278 [2024-12-16 02:58:11.666258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.666289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 
00:36:41.278 [2024-12-16 02:58:11.666501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.666533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 00:36:41.278 [2024-12-16 02:58:11.666740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.666773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 00:36:41.278 [2024-12-16 02:58:11.666902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.666935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 00:36:41.278 [2024-12-16 02:58:11.667105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.667138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 00:36:41.278 [2024-12-16 02:58:11.667404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.667436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 
00:36:41.278 [2024-12-16 02:58:11.667626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.667658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 00:36:41.278 [2024-12-16 02:58:11.667789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.667820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 00:36:41.278 [2024-12-16 02:58:11.667958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.667991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 00:36:41.278 [2024-12-16 02:58:11.668206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.668237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 00:36:41.278 [2024-12-16 02:58:11.668430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.668461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 
00:36:41.278 [2024-12-16 02:58:11.668656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.668688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 00:36:41.278 [2024-12-16 02:58:11.668807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.668838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 00:36:41.278 [2024-12-16 02:58:11.669072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.669104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 00:36:41.278 [2024-12-16 02:58:11.669208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.669242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 00:36:41.278 [2024-12-16 02:58:11.669470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.669502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 
00:36:41.278 [2024-12-16 02:58:11.669766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.669797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 00:36:41.278 [2024-12-16 02:58:11.670196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.670230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 00:36:41.278 [2024-12-16 02:58:11.670424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.670456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 00:36:41.278 [2024-12-16 02:58:11.670589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.278 [2024-12-16 02:58:11.670627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.278 qpair failed and we were unable to recover it. 00:36:41.278 [2024-12-16 02:58:11.670869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.279 [2024-12-16 02:58:11.670902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.279 qpair failed and we were unable to recover it. 
00:36:41.279 [2024-12-16 02:58:11.671146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.279 [2024-12-16 02:58:11.671178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.279 qpair failed and we were unable to recover it. 00:36:41.279 [2024-12-16 02:58:11.671297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.279 [2024-12-16 02:58:11.671328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.279 qpair failed and we were unable to recover it. 00:36:41.279 [2024-12-16 02:58:11.671500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.279 [2024-12-16 02:58:11.671532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.279 qpair failed and we were unable to recover it. 00:36:41.279 [2024-12-16 02:58:11.671821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.279 [2024-12-16 02:58:11.671860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.279 qpair failed and we were unable to recover it. 00:36:41.279 [2024-12-16 02:58:11.672115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.279 [2024-12-16 02:58:11.672147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.279 qpair failed and we were unable to recover it. 
00:36:41.281 [2024-12-16 02:58:11.697665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.281 [2024-12-16 02:58:11.697696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.281 qpair failed and we were unable to recover it. 00:36:41.281 [2024-12-16 02:58:11.697883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.281 [2024-12-16 02:58:11.697915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.281 qpair failed and we were unable to recover it. 00:36:41.281 [2024-12-16 02:58:11.698048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.282 [2024-12-16 02:58:11.698080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.282 qpair failed and we were unable to recover it. 00:36:41.282 [2024-12-16 02:58:11.698335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.282 [2024-12-16 02:58:11.698367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.282 qpair failed and we were unable to recover it. 00:36:41.282 [2024-12-16 02:58:11.698556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.282 [2024-12-16 02:58:11.698588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.282 qpair failed and we were unable to recover it. 
00:36:41.282 [2024-12-16 02:58:11.698772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.282 [2024-12-16 02:58:11.698804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.282 qpair failed and we were unable to recover it. 00:36:41.282 [2024-12-16 02:58:11.698937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.282 [2024-12-16 02:58:11.698971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.282 qpair failed and we were unable to recover it. 00:36:41.282 [2024-12-16 02:58:11.699093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.282 [2024-12-16 02:58:11.699123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.282 qpair failed and we were unable to recover it. 00:36:41.282 [2024-12-16 02:58:11.699239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.282 [2024-12-16 02:58:11.699271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.282 qpair failed and we were unable to recover it. 00:36:41.282 [2024-12-16 02:58:11.699516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.282 [2024-12-16 02:58:11.699548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.282 qpair failed and we were unable to recover it. 
00:36:41.282 [2024-12-16 02:58:11.699751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.282 [2024-12-16 02:58:11.699783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.282 qpair failed and we were unable to recover it. 00:36:41.282 [2024-12-16 02:58:11.700031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.282 [2024-12-16 02:58:11.700064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.282 qpair failed and we were unable to recover it. 00:36:41.282 [2024-12-16 02:58:11.700355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.282 [2024-12-16 02:58:11.700387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.282 qpair failed and we were unable to recover it. 00:36:41.282 [2024-12-16 02:58:11.700506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.282 [2024-12-16 02:58:11.700537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.282 qpair failed and we were unable to recover it. 00:36:41.282 [2024-12-16 02:58:11.700715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.282 [2024-12-16 02:58:11.700747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.282 qpair failed and we were unable to recover it. 
00:36:41.282 [2024-12-16 02:58:11.700925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.282 [2024-12-16 02:58:11.700959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.282 qpair failed and we were unable to recover it. 00:36:41.282 [2024-12-16 02:58:11.701079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.282 [2024-12-16 02:58:11.701110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.282 qpair failed and we were unable to recover it. 00:36:41.282 [2024-12-16 02:58:11.701357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.282 [2024-12-16 02:58:11.701389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.282 qpair failed and we were unable to recover it. 00:36:41.282 [2024-12-16 02:58:11.701611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.282 [2024-12-16 02:58:11.701642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.282 qpair failed and we were unable to recover it. 00:36:41.282 [2024-12-16 02:58:11.701835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.282 [2024-12-16 02:58:11.701888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.282 qpair failed and we were unable to recover it. 
00:36:41.282 [2024-12-16 02:58:11.702162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.282 [2024-12-16 02:58:11.702195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.282 qpair failed and we were unable to recover it. 00:36:41.282 [2024-12-16 02:58:11.702369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.282 [2024-12-16 02:58:11.702400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.282 qpair failed and we were unable to recover it. 00:36:41.282 [2024-12-16 02:58:11.702597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.282 [2024-12-16 02:58:11.702628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.282 qpair failed and we were unable to recover it. 00:36:41.282 [2024-12-16 02:58:11.702822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.282 [2024-12-16 02:58:11.702864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.282 qpair failed and we were unable to recover it. 00:36:41.282 [2024-12-16 02:58:11.703141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.282 [2024-12-16 02:58:11.703173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.282 qpair failed and we were unable to recover it. 
00:36:41.282 [2024-12-16 02:58:11.703424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.282 [2024-12-16 02:58:11.703455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.282 qpair failed and we were unable to recover it. 00:36:41.282 [2024-12-16 02:58:11.703655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.282 [2024-12-16 02:58:11.703687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.282 qpair failed and we were unable to recover it. 00:36:41.282 [2024-12-16 02:58:11.703879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.282 [2024-12-16 02:58:11.703912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.282 qpair failed and we were unable to recover it. 00:36:41.282 [2024-12-16 02:58:11.704107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.282 [2024-12-16 02:58:11.704138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.282 qpair failed and we were unable to recover it. 00:36:41.282 [2024-12-16 02:58:11.704325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.282 [2024-12-16 02:58:11.704358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.282 qpair failed and we were unable to recover it. 
00:36:41.282 [2024-12-16 02:58:11.704528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.282 [2024-12-16 02:58:11.704560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.282 qpair failed and we were unable to recover it. 00:36:41.282 [2024-12-16 02:58:11.704680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.282 [2024-12-16 02:58:11.704712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.282 qpair failed and we were unable to recover it. 00:36:41.282 [2024-12-16 02:58:11.704923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.282 [2024-12-16 02:58:11.704957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.282 qpair failed and we were unable to recover it. 00:36:41.282 [2024-12-16 02:58:11.705125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.705196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 00:36:41.283 [2024-12-16 02:58:11.705487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.705523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 
00:36:41.283 [2024-12-16 02:58:11.705800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.705833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 00:36:41.283 [2024-12-16 02:58:11.706116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.706148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 00:36:41.283 [2024-12-16 02:58:11.706275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.706306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 00:36:41.283 [2024-12-16 02:58:11.706486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.706518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 00:36:41.283 [2024-12-16 02:58:11.706701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.706733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 
00:36:41.283 [2024-12-16 02:58:11.706934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.706972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 00:36:41.283 [2024-12-16 02:58:11.707160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.707193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 00:36:41.283 [2024-12-16 02:58:11.707318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.707350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 00:36:41.283 [2024-12-16 02:58:11.707489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.707521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 00:36:41.283 [2024-12-16 02:58:11.707638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.707669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 
00:36:41.283 [2024-12-16 02:58:11.707867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.707900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 00:36:41.283 [2024-12-16 02:58:11.708186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.708228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 00:36:41.283 [2024-12-16 02:58:11.708360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.708392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 00:36:41.283 [2024-12-16 02:58:11.708510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.708545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 00:36:41.283 [2024-12-16 02:58:11.708737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.708767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 
00:36:41.283 [2024-12-16 02:58:11.708951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.708983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 00:36:41.283 [2024-12-16 02:58:11.709158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.709190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 00:36:41.283 [2024-12-16 02:58:11.709377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.709409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 00:36:41.283 [2024-12-16 02:58:11.709672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.709703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 00:36:41.283 [2024-12-16 02:58:11.709835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.709876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 
00:36:41.283 [2024-12-16 02:58:11.709995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.710028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 00:36:41.283 [2024-12-16 02:58:11.710150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.710182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 00:36:41.283 [2024-12-16 02:58:11.710374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.710405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 00:36:41.283 [2024-12-16 02:58:11.710515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.710547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 00:36:41.283 [2024-12-16 02:58:11.710802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.710834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 
00:36:41.283 [2024-12-16 02:58:11.711041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.711074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 00:36:41.283 [2024-12-16 02:58:11.711248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.711279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 00:36:41.283 [2024-12-16 02:58:11.711457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.711490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 00:36:41.283 [2024-12-16 02:58:11.711600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.711630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 00:36:41.283 [2024-12-16 02:58:11.711813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.711857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 
00:36:41.283 [2024-12-16 02:58:11.712033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.712066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 00:36:41.283 [2024-12-16 02:58:11.712328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.712360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 00:36:41.283 [2024-12-16 02:58:11.712542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.712574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 00:36:41.283 [2024-12-16 02:58:11.712701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.712732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 00:36:41.283 [2024-12-16 02:58:11.712968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.713001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.283 qpair failed and we were unable to recover it. 
00:36:41.283 [2024-12-16 02:58:11.713188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.283 [2024-12-16 02:58:11.713219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.284 qpair failed and we were unable to recover it. 00:36:41.284 [2024-12-16 02:58:11.713499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.284 [2024-12-16 02:58:11.713531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.284 qpair failed and we were unable to recover it. 00:36:41.284 [2024-12-16 02:58:11.713651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.284 [2024-12-16 02:58:11.713682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.284 qpair failed and we were unable to recover it. 00:36:41.284 [2024-12-16 02:58:11.713865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.284 [2024-12-16 02:58:11.713902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.284 qpair failed and we were unable to recover it. 00:36:41.284 [2024-12-16 02:58:11.714141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.284 [2024-12-16 02:58:11.714173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.284 qpair failed and we were unable to recover it. 
00:36:41.284 [2024-12-16 02:58:11.714294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.284 [2024-12-16 02:58:11.714326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.284 qpair failed and we were unable to recover it. 
[... identical message pair repeated: posix.c:1054:posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420, each attempt ending "qpair failed and we were unable to recover it."; repeats span 02:58:11.714 through 02:58:11.739 ...]
00:36:41.287 [2024-12-16 02:58:11.739046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.287 [2024-12-16 02:58:11.739078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.287 qpair failed and we were unable to recover it. 
00:36:41.287 [2024-12-16 02:58:11.739204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.287 [2024-12-16 02:58:11.739235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.287 qpair failed and we were unable to recover it. 00:36:41.287 [2024-12-16 02:58:11.739500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.287 [2024-12-16 02:58:11.739532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.287 qpair failed and we were unable to recover it. 00:36:41.287 [2024-12-16 02:58:11.739650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.287 [2024-12-16 02:58:11.739682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.287 qpair failed and we were unable to recover it. 00:36:41.287 [2024-12-16 02:58:11.739940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.287 [2024-12-16 02:58:11.739973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.287 qpair failed and we were unable to recover it. 00:36:41.287 [2024-12-16 02:58:11.740147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.287 [2024-12-16 02:58:11.740178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.287 qpair failed and we were unable to recover it. 
00:36:41.287 [2024-12-16 02:58:11.740363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.287 [2024-12-16 02:58:11.740395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.287 qpair failed and we were unable to recover it. 00:36:41.287 [2024-12-16 02:58:11.740581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.287 [2024-12-16 02:58:11.740614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.287 qpair failed and we were unable to recover it. 00:36:41.287 [2024-12-16 02:58:11.740861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.287 [2024-12-16 02:58:11.740894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.287 qpair failed and we were unable to recover it. 00:36:41.287 [2024-12-16 02:58:11.741064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.287 [2024-12-16 02:58:11.741103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.287 qpair failed and we were unable to recover it. 00:36:41.287 [2024-12-16 02:58:11.741293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.287 [2024-12-16 02:58:11.741324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.287 qpair failed and we were unable to recover it. 
00:36:41.287 [2024-12-16 02:58:11.741520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.287 [2024-12-16 02:58:11.741552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.287 qpair failed and we were unable to recover it. 00:36:41.287 [2024-12-16 02:58:11.741732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.287 [2024-12-16 02:58:11.741764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.287 qpair failed and we were unable to recover it. 00:36:41.287 [2024-12-16 02:58:11.741893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.287 [2024-12-16 02:58:11.741925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.287 qpair failed and we were unable to recover it. 00:36:41.287 [2024-12-16 02:58:11.742130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.287 [2024-12-16 02:58:11.742162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.287 qpair failed and we were unable to recover it. 00:36:41.287 [2024-12-16 02:58:11.742425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.287 [2024-12-16 02:58:11.742456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.287 qpair failed and we were unable to recover it. 
00:36:41.287 [2024-12-16 02:58:11.742630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.287 [2024-12-16 02:58:11.742661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.287 qpair failed and we were unable to recover it. 00:36:41.287 [2024-12-16 02:58:11.742839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.287 [2024-12-16 02:58:11.742884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.287 qpair failed and we were unable to recover it. 00:36:41.287 [2024-12-16 02:58:11.743010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.287 [2024-12-16 02:58:11.743042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.287 qpair failed and we were unable to recover it. 00:36:41.287 [2024-12-16 02:58:11.743180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.287 [2024-12-16 02:58:11.743211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.287 qpair failed and we were unable to recover it. 00:36:41.287 [2024-12-16 02:58:11.743388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.287 [2024-12-16 02:58:11.743426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.287 qpair failed and we were unable to recover it. 
00:36:41.287 [2024-12-16 02:58:11.743608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.287 [2024-12-16 02:58:11.743639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.287 qpair failed and we were unable to recover it. 00:36:41.287 [2024-12-16 02:58:11.743834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.287 [2024-12-16 02:58:11.743877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.287 qpair failed and we were unable to recover it. 00:36:41.287 [2024-12-16 02:58:11.744061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.287 [2024-12-16 02:58:11.744093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.287 qpair failed and we were unable to recover it. 00:36:41.287 [2024-12-16 02:58:11.744280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.287 [2024-12-16 02:58:11.744312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.287 qpair failed and we were unable to recover it. 00:36:41.287 [2024-12-16 02:58:11.744428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.287 [2024-12-16 02:58:11.744459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.287 qpair failed and we were unable to recover it. 
00:36:41.287 [2024-12-16 02:58:11.744628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.287 [2024-12-16 02:58:11.744660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.287 qpair failed and we were unable to recover it. 00:36:41.287 [2024-12-16 02:58:11.744830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.287 [2024-12-16 02:58:11.744870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.287 qpair failed and we were unable to recover it. 00:36:41.287 [2024-12-16 02:58:11.745043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.287 [2024-12-16 02:58:11.745074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.287 qpair failed and we were unable to recover it. 00:36:41.287 [2024-12-16 02:58:11.745188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.745219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.745403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.745435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 
00:36:41.288 [2024-12-16 02:58:11.745550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.745581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.745794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.745825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.746016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.746048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.746222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.746255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.746427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.746459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 
00:36:41.288 [2024-12-16 02:58:11.746596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.746627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.746878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.746912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.747152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.747184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.747368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.747400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.747586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.747618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 
00:36:41.288 [2024-12-16 02:58:11.747818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.747856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.748118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.748149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.748319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.748350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.748525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.748557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.748741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.748772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 
00:36:41.288 [2024-12-16 02:58:11.748959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.748992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.749253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.749291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.749467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.749499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.749786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.749818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.749972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.750005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 
00:36:41.288 [2024-12-16 02:58:11.750191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.750223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.750494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.750526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.750751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.750782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.751023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.751057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.751317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.751349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 
00:36:41.288 [2024-12-16 02:58:11.751589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.751620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.751865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.751898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.752077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.752109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.752312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.752343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.752517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.752549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 
00:36:41.288 [2024-12-16 02:58:11.752682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.752715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.752912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.752945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.753124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.753156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.753424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.753455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.753643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.753675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 
00:36:41.288 [2024-12-16 02:58:11.753861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.753893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.754138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.754169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.754409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.288 [2024-12-16 02:58:11.754440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.288 qpair failed and we were unable to recover it. 00:36:41.288 [2024-12-16 02:58:11.754703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.289 [2024-12-16 02:58:11.754735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.289 qpair failed and we were unable to recover it. 00:36:41.289 [2024-12-16 02:58:11.754976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.289 [2024-12-16 02:58:11.755009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.289 qpair failed and we were unable to recover it. 
00:36:41.289 [2024-12-16 02:58:11.755251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.289 [2024-12-16 02:58:11.755283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.289 qpair failed and we were unable to recover it. 00:36:41.289 [2024-12-16 02:58:11.755519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.289 [2024-12-16 02:58:11.755551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.289 qpair failed and we were unable to recover it. 00:36:41.289 [2024-12-16 02:58:11.755683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.289 [2024-12-16 02:58:11.755715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.289 qpair failed and we were unable to recover it. 00:36:41.289 [2024-12-16 02:58:11.755854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.289 [2024-12-16 02:58:11.755887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.289 qpair failed and we were unable to recover it. 00:36:41.289 [2024-12-16 02:58:11.756128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.289 [2024-12-16 02:58:11.756160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.289 qpair failed and we were unable to recover it. 
00:36:41.289 [2024-12-16 02:58:11.756340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.289 [2024-12-16 02:58:11.756371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:41.289 qpair failed and we were unable to recover it.
00:36:41.290 [2024-12-16 02:58:11.767894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.290 [2024-12-16 02:58:11.767958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:41.290 qpair failed and we were unable to recover it.
00:36:41.292 [2024-12-16 02:58:11.781863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.292 [2024-12-16 02:58:11.781897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.292 qpair failed and we were unable to recover it. 00:36:41.292 [2024-12-16 02:58:11.782016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.292 [2024-12-16 02:58:11.782048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.292 qpair failed and we were unable to recover it. 00:36:41.292 [2024-12-16 02:58:11.782263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.292 [2024-12-16 02:58:11.782294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.292 qpair failed and we were unable to recover it. 00:36:41.292 [2024-12-16 02:58:11.782494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.292 [2024-12-16 02:58:11.782525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.292 qpair failed and we were unable to recover it. 00:36:41.292 [2024-12-16 02:58:11.782704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.292 [2024-12-16 02:58:11.782735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.292 qpair failed and we were unable to recover it. 
00:36:41.292 [2024-12-16 02:58:11.783011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.292 [2024-12-16 02:58:11.783044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.292 qpair failed and we were unable to recover it. 00:36:41.292 [2024-12-16 02:58:11.783282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.292 [2024-12-16 02:58:11.783313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.292 qpair failed and we were unable to recover it. 00:36:41.292 [2024-12-16 02:58:11.783447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.292 [2024-12-16 02:58:11.783479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.292 qpair failed and we were unable to recover it. 00:36:41.292 [2024-12-16 02:58:11.783743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.292 [2024-12-16 02:58:11.783775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.292 qpair failed and we were unable to recover it. 00:36:41.292 [2024-12-16 02:58:11.784025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.292 [2024-12-16 02:58:11.784059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.292 qpair failed and we were unable to recover it. 
00:36:41.292 [2024-12-16 02:58:11.784244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.292 [2024-12-16 02:58:11.784274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.292 qpair failed and we were unable to recover it. 00:36:41.292 [2024-12-16 02:58:11.784544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.292 [2024-12-16 02:58:11.784575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.292 qpair failed and we were unable to recover it. 00:36:41.292 [2024-12-16 02:58:11.784694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.292 [2024-12-16 02:58:11.784726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.292 qpair failed and we were unable to recover it. 00:36:41.292 [2024-12-16 02:58:11.784986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.292 [2024-12-16 02:58:11.785019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.292 qpair failed and we were unable to recover it. 00:36:41.292 [2024-12-16 02:58:11.785224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.292 [2024-12-16 02:58:11.785256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.292 qpair failed and we were unable to recover it. 
00:36:41.292 [2024-12-16 02:58:11.785480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.292 [2024-12-16 02:58:11.785511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.292 qpair failed and we were unable to recover it. 00:36:41.292 [2024-12-16 02:58:11.785646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.292 [2024-12-16 02:58:11.785676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.292 qpair failed and we were unable to recover it. 00:36:41.292 [2024-12-16 02:58:11.785861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.292 [2024-12-16 02:58:11.785894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.292 qpair failed and we were unable to recover it. 00:36:41.292 [2024-12-16 02:58:11.786120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.292 [2024-12-16 02:58:11.786203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.292 qpair failed and we were unable to recover it. 00:36:41.292 [2024-12-16 02:58:11.786456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.292 [2024-12-16 02:58:11.786501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.292 qpair failed and we were unable to recover it. 
00:36:41.292 [2024-12-16 02:58:11.786774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.292 [2024-12-16 02:58:11.786807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.292 qpair failed and we were unable to recover it. 00:36:41.292 [2024-12-16 02:58:11.787081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.292 [2024-12-16 02:58:11.787118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.292 qpair failed and we were unable to recover it. 00:36:41.292 [2024-12-16 02:58:11.787308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.292 [2024-12-16 02:58:11.787340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.292 qpair failed and we were unable to recover it. 00:36:41.292 [2024-12-16 02:58:11.787522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.292 [2024-12-16 02:58:11.787553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.292 qpair failed and we were unable to recover it. 00:36:41.292 [2024-12-16 02:58:11.787724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.292 [2024-12-16 02:58:11.787755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.292 qpair failed and we were unable to recover it. 
00:36:41.292 [2024-12-16 02:58:11.788025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.292 [2024-12-16 02:58:11.788060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.292 qpair failed and we were unable to recover it. 00:36:41.292 [2024-12-16 02:58:11.788242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.292 [2024-12-16 02:58:11.788274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.292 qpair failed and we were unable to recover it. 00:36:41.292 [2024-12-16 02:58:11.788401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.292 [2024-12-16 02:58:11.788433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.292 qpair failed and we were unable to recover it. 00:36:41.292 [2024-12-16 02:58:11.788691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.292 [2024-12-16 02:58:11.788722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.292 qpair failed and we were unable to recover it. 00:36:41.292 [2024-12-16 02:58:11.788910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.292 [2024-12-16 02:58:11.788943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.292 qpair failed and we were unable to recover it. 
00:36:41.292 [2024-12-16 02:58:11.789144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.789176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.789363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.789411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.789586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.789618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.789807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.789839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.790045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.790077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 
00:36:41.293 [2024-12-16 02:58:11.790243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.790274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.790458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.790489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.790673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.790704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.790827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.790872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.791047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.791079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 
00:36:41.293 [2024-12-16 02:58:11.791362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.791392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.791659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.791691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.791937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.791970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.792230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.792261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.792446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.792477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 
00:36:41.293 [2024-12-16 02:58:11.792740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.792772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.792983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.793016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.793296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.793328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.793513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.793544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.793804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.793835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 
00:36:41.293 [2024-12-16 02:58:11.794034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.794067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.794255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.794286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.794414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.794444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.794626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.794657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.794895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.794928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 
00:36:41.293 [2024-12-16 02:58:11.795114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.795146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.795381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.795411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.795539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.795570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.795759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.795790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.795984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.796017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 
00:36:41.293 [2024-12-16 02:58:11.796124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.796154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.796358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.796390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.796600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.796635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.796762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.796794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.796915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.796949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 
00:36:41.293 [2024-12-16 02:58:11.797197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.797227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.797415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.797447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.797619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.797650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.797765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.293 [2024-12-16 02:58:11.797796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.293 qpair failed and we were unable to recover it. 00:36:41.293 [2024-12-16 02:58:11.797995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.294 [2024-12-16 02:58:11.798028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.294 qpair failed and we were unable to recover it. 
00:36:41.294 [2024-12-16 02:58:11.798217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.294 [2024-12-16 02:58:11.798249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.294 qpair failed and we were unable to recover it. 00:36:41.294 [2024-12-16 02:58:11.798465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.294 [2024-12-16 02:58:11.798496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.294 qpair failed and we were unable to recover it. 00:36:41.294 [2024-12-16 02:58:11.798739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.294 [2024-12-16 02:58:11.798770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.294 qpair failed and we were unable to recover it. 00:36:41.294 [2024-12-16 02:58:11.799015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.294 [2024-12-16 02:58:11.799048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.294 qpair failed and we were unable to recover it. 00:36:41.294 [2024-12-16 02:58:11.799236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.294 [2024-12-16 02:58:11.799267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.294 qpair failed and we were unable to recover it. 
00:36:41.294 [2024-12-16 02:58:11.799468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.294 [2024-12-16 02:58:11.799499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:41.294 qpair failed and we were unable to recover it.
00:36:41.294 [... the same three-message sequence (posix_sock_create: connect() failed, errno = 111, i.e. ECONNREFUSED; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt from 02:58:11.799687 through 02:58:11.824767 ...]
00:36:41.297 [2024-12-16 02:58:11.824935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.297 [2024-12-16 02:58:11.824968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.297 qpair failed and we were unable to recover it. 00:36:41.297 [2024-12-16 02:58:11.825206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.297 [2024-12-16 02:58:11.825237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.297 qpair failed and we were unable to recover it. 00:36:41.297 [2024-12-16 02:58:11.825492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.297 [2024-12-16 02:58:11.825523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.297 qpair failed and we were unable to recover it. 00:36:41.297 [2024-12-16 02:58:11.825692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.297 [2024-12-16 02:58:11.825723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.297 qpair failed and we were unable to recover it. 00:36:41.297 [2024-12-16 02:58:11.825977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.297 [2024-12-16 02:58:11.826009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.297 qpair failed and we were unable to recover it. 
00:36:41.297 [2024-12-16 02:58:11.826291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.297 [2024-12-16 02:58:11.826322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.297 qpair failed and we were unable to recover it. 00:36:41.297 [2024-12-16 02:58:11.826567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.297 [2024-12-16 02:58:11.826598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.297 qpair failed and we were unable to recover it. 00:36:41.297 [2024-12-16 02:58:11.826702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.297 [2024-12-16 02:58:11.826734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.297 qpair failed and we were unable to recover it. 00:36:41.297 [2024-12-16 02:58:11.826905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.297 [2024-12-16 02:58:11.826938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.297 qpair failed and we were unable to recover it. 00:36:41.297 [2024-12-16 02:58:11.827205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.297 [2024-12-16 02:58:11.827236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.297 qpair failed and we were unable to recover it. 
00:36:41.297 [2024-12-16 02:58:11.827428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.297 [2024-12-16 02:58:11.827460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.297 qpair failed and we were unable to recover it. 00:36:41.297 [2024-12-16 02:58:11.827701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.297 [2024-12-16 02:58:11.827732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.297 qpair failed and we were unable to recover it. 00:36:41.297 [2024-12-16 02:58:11.827922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.297 [2024-12-16 02:58:11.827954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.297 qpair failed and we were unable to recover it. 00:36:41.297 [2024-12-16 02:58:11.828167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.297 [2024-12-16 02:58:11.828198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.297 qpair failed and we were unable to recover it. 00:36:41.297 [2024-12-16 02:58:11.828367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.297 [2024-12-16 02:58:11.828399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.297 qpair failed and we were unable to recover it. 
00:36:41.297 [2024-12-16 02:58:11.828603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.297 [2024-12-16 02:58:11.828634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.297 qpair failed and we were unable to recover it. 00:36:41.297 [2024-12-16 02:58:11.828815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.297 [2024-12-16 02:58:11.828855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.297 qpair failed and we were unable to recover it. 00:36:41.297 [2024-12-16 02:58:11.829031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.297 [2024-12-16 02:58:11.829062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.297 qpair failed and we were unable to recover it. 00:36:41.297 [2024-12-16 02:58:11.829267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.297 [2024-12-16 02:58:11.829298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.297 qpair failed and we were unable to recover it. 00:36:41.297 [2024-12-16 02:58:11.829424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.297 [2024-12-16 02:58:11.829454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.297 qpair failed and we were unable to recover it. 
00:36:41.297 [2024-12-16 02:58:11.829559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.297 [2024-12-16 02:58:11.829590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.297 qpair failed and we were unable to recover it. 00:36:41.297 [2024-12-16 02:58:11.829702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.297 [2024-12-16 02:58:11.829733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.297 qpair failed and we were unable to recover it. 00:36:41.297 [2024-12-16 02:58:11.829875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.297 [2024-12-16 02:58:11.829907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.297 qpair failed and we were unable to recover it. 00:36:41.298 [2024-12-16 02:58:11.830103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.830135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 00:36:41.298 [2024-12-16 02:58:11.830310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.830340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 
00:36:41.298 [2024-12-16 02:58:11.830532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.830563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 00:36:41.298 [2024-12-16 02:58:11.830744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.830775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 00:36:41.298 [2024-12-16 02:58:11.830896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.830929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 00:36:41.298 [2024-12-16 02:58:11.831107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.831145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 00:36:41.298 [2024-12-16 02:58:11.831401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.831431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 
00:36:41.298 [2024-12-16 02:58:11.831625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.831656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 00:36:41.298 [2024-12-16 02:58:11.831844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.831883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 00:36:41.298 [2024-12-16 02:58:11.832010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.832041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 00:36:41.298 [2024-12-16 02:58:11.832230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.832261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 00:36:41.298 [2024-12-16 02:58:11.832495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.832525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 
00:36:41.298 [2024-12-16 02:58:11.832710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.832741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 00:36:41.298 [2024-12-16 02:58:11.832865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.832896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 00:36:41.298 [2024-12-16 02:58:11.833006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.833037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 00:36:41.298 [2024-12-16 02:58:11.833220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.833252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 00:36:41.298 [2024-12-16 02:58:11.833441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.833471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 
00:36:41.298 [2024-12-16 02:58:11.833585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.833617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 00:36:41.298 [2024-12-16 02:58:11.833776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.833807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 00:36:41.298 [2024-12-16 02:58:11.834001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.834035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 00:36:41.298 [2024-12-16 02:58:11.834162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.834193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 00:36:41.298 [2024-12-16 02:58:11.834362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.834394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 
00:36:41.298 [2024-12-16 02:58:11.834518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.834549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 00:36:41.298 [2024-12-16 02:58:11.834810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.834840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 00:36:41.298 [2024-12-16 02:58:11.835049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.835081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 00:36:41.298 [2024-12-16 02:58:11.835280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.835311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 00:36:41.298 [2024-12-16 02:58:11.835478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.835509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 
00:36:41.298 [2024-12-16 02:58:11.835775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.835806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 00:36:41.298 [2024-12-16 02:58:11.835931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.835963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 00:36:41.298 [2024-12-16 02:58:11.836108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.836140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 00:36:41.298 [2024-12-16 02:58:11.836243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.836274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 00:36:41.298 [2024-12-16 02:58:11.836529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.298 [2024-12-16 02:58:11.836560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.298 qpair failed and we were unable to recover it. 
00:36:41.299 [2024-12-16 02:58:11.836690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.836721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 00:36:41.299 [2024-12-16 02:58:11.837010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.837043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 00:36:41.299 [2024-12-16 02:58:11.837270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.837301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 00:36:41.299 [2024-12-16 02:58:11.837482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.837512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 00:36:41.299 [2024-12-16 02:58:11.837751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.837782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 
00:36:41.299 [2024-12-16 02:58:11.837967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.838000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 00:36:41.299 [2024-12-16 02:58:11.838282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.838312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 00:36:41.299 [2024-12-16 02:58:11.838493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.838524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 00:36:41.299 [2024-12-16 02:58:11.838711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.838742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 00:36:41.299 [2024-12-16 02:58:11.838946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.838978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 
00:36:41.299 [2024-12-16 02:58:11.839175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.839206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 00:36:41.299 [2024-12-16 02:58:11.839341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.839373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 00:36:41.299 [2024-12-16 02:58:11.839557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.839588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 00:36:41.299 [2024-12-16 02:58:11.839702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.839739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 00:36:41.299 [2024-12-16 02:58:11.839932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.839964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 
00:36:41.299 [2024-12-16 02:58:11.840186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.840217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 00:36:41.299 [2024-12-16 02:58:11.840477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.840508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 00:36:41.299 [2024-12-16 02:58:11.840747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.840777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 00:36:41.299 [2024-12-16 02:58:11.840959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.840992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 00:36:41.299 [2024-12-16 02:58:11.841182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.841212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 
00:36:41.299 [2024-12-16 02:58:11.841347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.841378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 00:36:41.299 [2024-12-16 02:58:11.841605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.841636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 00:36:41.299 [2024-12-16 02:58:11.841800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.841832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 00:36:41.299 [2024-12-16 02:58:11.842030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.842061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 00:36:41.299 [2024-12-16 02:58:11.842258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.842289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 
00:36:41.299 [2024-12-16 02:58:11.842574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.842605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 00:36:41.299 [2024-12-16 02:58:11.842735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.842766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 00:36:41.299 [2024-12-16 02:58:11.843033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.843067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 00:36:41.299 [2024-12-16 02:58:11.843170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.843200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 00:36:41.299 [2024-12-16 02:58:11.843314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.843344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 
00:36:41.299 [2024-12-16 02:58:11.843515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.843547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 00:36:41.299 [2024-12-16 02:58:11.843650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.843680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 00:36:41.299 [2024-12-16 02:58:11.843889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.843921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 00:36:41.299 [2024-12-16 02:58:11.844092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.844122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 00:36:41.299 [2024-12-16 02:58:11.844248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.299 [2024-12-16 02:58:11.844279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.299 qpair failed and we were unable to recover it. 
00:36:41.300 [2024-12-16 02:58:11.844518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.844549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 00:36:41.300 [2024-12-16 02:58:11.844674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.844705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 00:36:41.300 [2024-12-16 02:58:11.844897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.844930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 00:36:41.300 [2024-12-16 02:58:11.845172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.845203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 00:36:41.300 [2024-12-16 02:58:11.845394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.845425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 
00:36:41.300 [2024-12-16 02:58:11.845603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.845635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 00:36:41.300 [2024-12-16 02:58:11.845766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.845797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 00:36:41.300 [2024-12-16 02:58:11.845986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.846018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 00:36:41.300 [2024-12-16 02:58:11.846258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.846289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 00:36:41.300 [2024-12-16 02:58:11.846468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.846499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 
00:36:41.300 [2024-12-16 02:58:11.846668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.846699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 00:36:41.300 [2024-12-16 02:58:11.846821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.846859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 00:36:41.300 [2024-12-16 02:58:11.847051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.847082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 00:36:41.300 [2024-12-16 02:58:11.847293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.847324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 00:36:41.300 [2024-12-16 02:58:11.847562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.847594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 
00:36:41.300 [2024-12-16 02:58:11.847867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.847899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 00:36:41.300 [2024-12-16 02:58:11.848156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.848187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 00:36:41.300 [2024-12-16 02:58:11.848365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.848396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 00:36:41.300 [2024-12-16 02:58:11.848608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.848644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 00:36:41.300 [2024-12-16 02:58:11.848830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.848873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 
00:36:41.300 [2024-12-16 02:58:11.849120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.849151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 00:36:41.300 [2024-12-16 02:58:11.849415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.849445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 00:36:41.300 [2024-12-16 02:58:11.849629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.849660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 00:36:41.300 [2024-12-16 02:58:11.849829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.849887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 00:36:41.300 [2024-12-16 02:58:11.850074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.850106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 
00:36:41.300 [2024-12-16 02:58:11.850356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.850388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 00:36:41.300 [2024-12-16 02:58:11.850490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.850520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 00:36:41.300 [2024-12-16 02:58:11.850719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.850750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 00:36:41.300 [2024-12-16 02:58:11.850989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.851022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 00:36:41.300 [2024-12-16 02:58:11.851204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.851234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 
00:36:41.300 [2024-12-16 02:58:11.851439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.851471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 00:36:41.300 [2024-12-16 02:58:11.851708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.851739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.300 qpair failed and we were unable to recover it. 00:36:41.300 [2024-12-16 02:58:11.851994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.300 [2024-12-16 02:58:11.852027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.301 qpair failed and we were unable to recover it. 00:36:41.301 [2024-12-16 02:58:11.852155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.301 [2024-12-16 02:58:11.852186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.301 qpair failed and we were unable to recover it. 00:36:41.301 [2024-12-16 02:58:11.852355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.301 [2024-12-16 02:58:11.852386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.301 qpair failed and we were unable to recover it. 
00:36:41.301 [2024-12-16 02:58:11.852592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.301 [2024-12-16 02:58:11.852623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.301 qpair failed and we were unable to recover it. 00:36:41.301 [2024-12-16 02:58:11.852750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.301 [2024-12-16 02:58:11.852780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.301 qpair failed and we were unable to recover it. 00:36:41.301 [2024-12-16 02:58:11.852908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.301 [2024-12-16 02:58:11.852941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.301 qpair failed and we were unable to recover it. 00:36:41.301 [2024-12-16 02:58:11.853148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.301 [2024-12-16 02:58:11.853179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.301 qpair failed and we were unable to recover it. 00:36:41.301 [2024-12-16 02:58:11.853312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.301 [2024-12-16 02:58:11.853343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.301 qpair failed and we were unable to recover it. 
00:36:41.301 [2024-12-16 02:58:11.853466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.301 [2024-12-16 02:58:11.853497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.301 qpair failed and we were unable to recover it. 00:36:41.301 [2024-12-16 02:58:11.853677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.301 [2024-12-16 02:58:11.853707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.301 qpair failed and we were unable to recover it. 00:36:41.301 [2024-12-16 02:58:11.853888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.301 [2024-12-16 02:58:11.853921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.301 qpair failed and we were unable to recover it. 00:36:41.301 [2024-12-16 02:58:11.854100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.301 [2024-12-16 02:58:11.854131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.301 qpair failed and we were unable to recover it. 00:36:41.301 [2024-12-16 02:58:11.854240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.301 [2024-12-16 02:58:11.854270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.301 qpair failed and we were unable to recover it. 
00:36:41.301 [2024-12-16 02:58:11.854393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.301 [2024-12-16 02:58:11.854426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.301 qpair failed and we were unable to recover it. 00:36:41.301 [2024-12-16 02:58:11.854606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.301 [2024-12-16 02:58:11.854636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.301 qpair failed and we were unable to recover it. 00:36:41.301 [2024-12-16 02:58:11.854816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.301 [2024-12-16 02:58:11.854855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.301 qpair failed and we were unable to recover it. 00:36:41.301 [2024-12-16 02:58:11.854993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.301 [2024-12-16 02:58:11.855025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.301 qpair failed and we were unable to recover it. 00:36:41.301 [2024-12-16 02:58:11.855141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.301 [2024-12-16 02:58:11.855173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.301 qpair failed and we were unable to recover it. 
00:36:41.301 [2024-12-16 02:58:11.855285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.301 [2024-12-16 02:58:11.855317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.301 qpair failed and we were unable to recover it. 00:36:41.301 [2024-12-16 02:58:11.855503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.301 [2024-12-16 02:58:11.855534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.301 qpair failed and we were unable to recover it. 00:36:41.301 [2024-12-16 02:58:11.855714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.301 [2024-12-16 02:58:11.855745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.301 qpair failed and we were unable to recover it. 00:36:41.301 [2024-12-16 02:58:11.855922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.301 [2024-12-16 02:58:11.855955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.301 qpair failed and we were unable to recover it. 00:36:41.301 [2024-12-16 02:58:11.856222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.301 [2024-12-16 02:58:11.856253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.301 qpair failed and we were unable to recover it. 
00:36:41.301 [2024-12-16 02:58:11.856367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.301 [2024-12-16 02:58:11.856398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.301 qpair failed and we were unable to recover it. 00:36:41.301 [2024-12-16 02:58:11.856634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.301 [2024-12-16 02:58:11.856664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.301 qpair failed and we were unable to recover it. 00:36:41.301 [2024-12-16 02:58:11.856780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.301 [2024-12-16 02:58:11.856812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.301 qpair failed and we were unable to recover it. 00:36:41.301 [2024-12-16 02:58:11.857001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.301 [2024-12-16 02:58:11.857037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.301 qpair failed and we were unable to recover it. 00:36:41.301 [2024-12-16 02:58:11.857292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.301 [2024-12-16 02:58:11.857324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.301 qpair failed and we were unable to recover it. 
00:36:41.301 [2024-12-16 02:58:11.857492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.301 [2024-12-16 02:58:11.857522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.301 qpair failed and we were unable to recover it. 00:36:41.301 [2024-12-16 02:58:11.857707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.301 [2024-12-16 02:58:11.857737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.302 qpair failed and we were unable to recover it. 00:36:41.302 [2024-12-16 02:58:11.857864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.302 [2024-12-16 02:58:11.857896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.302 qpair failed and we were unable to recover it. 00:36:41.302 [2024-12-16 02:58:11.858102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.302 [2024-12-16 02:58:11.858134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.302 qpair failed and we were unable to recover it. 00:36:41.302 [2024-12-16 02:58:11.858265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.302 [2024-12-16 02:58:11.858296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.302 qpair failed and we were unable to recover it. 
00:36:41.302 [2024-12-16 02:58:11.858556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.302 [2024-12-16 02:58:11.858587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.302 qpair failed and we were unable to recover it. 00:36:41.302 [2024-12-16 02:58:11.858772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.302 [2024-12-16 02:58:11.858803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.302 qpair failed and we were unable to recover it. 00:36:41.302 [2024-12-16 02:58:11.858942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.302 [2024-12-16 02:58:11.858974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.302 qpair failed and we were unable to recover it. 00:36:41.302 [2024-12-16 02:58:11.859149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.302 [2024-12-16 02:58:11.859180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.302 qpair failed and we were unable to recover it. 00:36:41.302 [2024-12-16 02:58:11.859380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.302 [2024-12-16 02:58:11.859412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.302 qpair failed and we were unable to recover it. 
00:36:41.302 [2024-12-16 02:58:11.859618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.302 [2024-12-16 02:58:11.859648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.302 qpair failed and we were unable to recover it. 00:36:41.302 [2024-12-16 02:58:11.859911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.302 [2024-12-16 02:58:11.859951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.302 qpair failed and we were unable to recover it. 00:36:41.302 [2024-12-16 02:58:11.860128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.302 [2024-12-16 02:58:11.860160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.302 qpair failed and we were unable to recover it. 00:36:41.302 [2024-12-16 02:58:11.860280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.302 [2024-12-16 02:58:11.860310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.302 qpair failed and we were unable to recover it. 00:36:41.302 [2024-12-16 02:58:11.860576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.302 [2024-12-16 02:58:11.860607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.302 qpair failed and we were unable to recover it. 
00:36:41.302 [2024-12-16 02:58:11.860868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.302 [2024-12-16 02:58:11.860900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.302 qpair failed and we were unable to recover it. 00:36:41.302 [2024-12-16 02:58:11.861088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.302 [2024-12-16 02:58:11.861119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.302 qpair failed and we were unable to recover it. 00:36:41.302 [2024-12-16 02:58:11.861387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.302 [2024-12-16 02:58:11.861418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.302 qpair failed and we were unable to recover it. 00:36:41.302 [2024-12-16 02:58:11.861555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.302 [2024-12-16 02:58:11.861586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.302 qpair failed and we were unable to recover it. 00:36:41.302 [2024-12-16 02:58:11.861841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.302 [2024-12-16 02:58:11.861912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.302 qpair failed and we were unable to recover it. 
00:36:41.302 [2024-12-16 02:58:11.862058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.302 [2024-12-16 02:58:11.862089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:41.302 qpair failed and we were unable to recover it.
00:36:41.305 [2024-12-16 02:58:11.885764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.305 [2024-12-16 02:58:11.885795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:41.305 qpair failed and we were unable to recover it.
00:36:41.305 [2024-12-16 02:58:11.886097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.305 [2024-12-16 02:58:11.886130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.305 qpair failed and we were unable to recover it. 00:36:41.305 [2024-12-16 02:58:11.886303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.305 [2024-12-16 02:58:11.886334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.305 qpair failed and we were unable to recover it. 00:36:41.305 [2024-12-16 02:58:11.886441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.305 [2024-12-16 02:58:11.886472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.305 qpair failed and we were unable to recover it. 00:36:41.305 [2024-12-16 02:58:11.886598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.305 [2024-12-16 02:58:11.886628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.305 qpair failed and we were unable to recover it. 00:36:41.305 [2024-12-16 02:58:11.886815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.305 [2024-12-16 02:58:11.886855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.305 qpair failed and we were unable to recover it. 
00:36:41.305 [2024-12-16 02:58:11.886968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.305 [2024-12-16 02:58:11.886999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.305 qpair failed and we were unable to recover it. 00:36:41.305 [2024-12-16 02:58:11.887179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.305 [2024-12-16 02:58:11.887209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.305 qpair failed and we were unable to recover it. 00:36:41.305 [2024-12-16 02:58:11.887343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.305 [2024-12-16 02:58:11.887374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.305 qpair failed and we were unable to recover it. 00:36:41.305 [2024-12-16 02:58:11.887520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.305 [2024-12-16 02:58:11.887551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.305 qpair failed and we were unable to recover it. 00:36:41.305 [2024-12-16 02:58:11.887731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.305 [2024-12-16 02:58:11.887762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.305 qpair failed and we were unable to recover it. 
00:36:41.305 [2024-12-16 02:58:11.887947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.305 [2024-12-16 02:58:11.887981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.305 qpair failed and we were unable to recover it. 00:36:41.305 [2024-12-16 02:58:11.888164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.305 [2024-12-16 02:58:11.888195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.305 qpair failed and we were unable to recover it. 00:36:41.305 [2024-12-16 02:58:11.888314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.888345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 00:36:41.306 [2024-12-16 02:58:11.888516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.888546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 00:36:41.306 [2024-12-16 02:58:11.888652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.888683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 
00:36:41.306 [2024-12-16 02:58:11.888919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.888953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 00:36:41.306 [2024-12-16 02:58:11.889064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.889095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 00:36:41.306 [2024-12-16 02:58:11.889260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.889291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 00:36:41.306 [2024-12-16 02:58:11.889416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.889448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 00:36:41.306 [2024-12-16 02:58:11.889564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.889595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 
00:36:41.306 [2024-12-16 02:58:11.889768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.889799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 00:36:41.306 [2024-12-16 02:58:11.889997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.890030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 00:36:41.306 [2024-12-16 02:58:11.890211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.890242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 00:36:41.306 [2024-12-16 02:58:11.890468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.890505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 00:36:41.306 [2024-12-16 02:58:11.890687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.890719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 
00:36:41.306 [2024-12-16 02:58:11.890840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.890880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 00:36:41.306 [2024-12-16 02:58:11.891149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.891180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 00:36:41.306 [2024-12-16 02:58:11.891416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.891446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 00:36:41.306 [2024-12-16 02:58:11.891643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.891673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 00:36:41.306 [2024-12-16 02:58:11.891870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.891903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 
00:36:41.306 [2024-12-16 02:58:11.892145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.892175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 00:36:41.306 [2024-12-16 02:58:11.892349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.892381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 00:36:41.306 [2024-12-16 02:58:11.892494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.892525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 00:36:41.306 [2024-12-16 02:58:11.892694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.892725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 00:36:41.306 [2024-12-16 02:58:11.892966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.892999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 
00:36:41.306 [2024-12-16 02:58:11.893191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.893221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 00:36:41.306 [2024-12-16 02:58:11.893464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.893496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 00:36:41.306 [2024-12-16 02:58:11.893686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.893717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 00:36:41.306 [2024-12-16 02:58:11.893918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.893950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 00:36:41.306 [2024-12-16 02:58:11.894205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.894236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 
00:36:41.306 [2024-12-16 02:58:11.894355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.894386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 00:36:41.306 [2024-12-16 02:58:11.894508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.894539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 00:36:41.306 [2024-12-16 02:58:11.894657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.894686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 00:36:41.306 [2024-12-16 02:58:11.894802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.894832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 00:36:41.306 [2024-12-16 02:58:11.894952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.894984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 
00:36:41.306 [2024-12-16 02:58:11.895100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.895131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 00:36:41.306 [2024-12-16 02:58:11.895306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.895337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 00:36:41.306 [2024-12-16 02:58:11.895451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.306 [2024-12-16 02:58:11.895481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.306 qpair failed and we were unable to recover it. 00:36:41.306 [2024-12-16 02:58:11.895675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.895706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 00:36:41.307 [2024-12-16 02:58:11.895833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.895872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 
00:36:41.307 [2024-12-16 02:58:11.896076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.896107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 00:36:41.307 [2024-12-16 02:58:11.896292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.896324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 00:36:41.307 [2024-12-16 02:58:11.896465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.896496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 00:36:41.307 [2024-12-16 02:58:11.896614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.896644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 00:36:41.307 [2024-12-16 02:58:11.896818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.896861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 
00:36:41.307 [2024-12-16 02:58:11.897101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.897132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 00:36:41.307 [2024-12-16 02:58:11.897253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.897284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 00:36:41.307 [2024-12-16 02:58:11.897400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.897430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 00:36:41.307 [2024-12-16 02:58:11.897638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.897668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 00:36:41.307 [2024-12-16 02:58:11.897770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.897800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 
00:36:41.307 [2024-12-16 02:58:11.897970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.898004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 00:36:41.307 [2024-12-16 02:58:11.898127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.898158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 00:36:41.307 [2024-12-16 02:58:11.898268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.898298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 00:36:41.307 [2024-12-16 02:58:11.898421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.898458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 00:36:41.307 [2024-12-16 02:58:11.898588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.898619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 
00:36:41.307 [2024-12-16 02:58:11.898813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.898843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 00:36:41.307 [2024-12-16 02:58:11.898956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.898987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 00:36:41.307 [2024-12-16 02:58:11.899248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.899278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 00:36:41.307 [2024-12-16 02:58:11.899471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.899504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 00:36:41.307 [2024-12-16 02:58:11.899634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.899666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 
00:36:41.307 [2024-12-16 02:58:11.899776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.899806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 00:36:41.307 [2024-12-16 02:58:11.900035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.900067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 00:36:41.307 [2024-12-16 02:58:11.900191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.900223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 00:36:41.307 [2024-12-16 02:58:11.900344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.900374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 00:36:41.307 [2024-12-16 02:58:11.900500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.900531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 
00:36:41.307 [2024-12-16 02:58:11.900654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.900687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 00:36:41.307 [2024-12-16 02:58:11.900952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.900986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 00:36:41.307 [2024-12-16 02:58:11.901170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.901202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 00:36:41.307 [2024-12-16 02:58:11.901426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.901457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 00:36:41.307 [2024-12-16 02:58:11.901720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.307 [2024-12-16 02:58:11.901751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.307 qpair failed and we were unable to recover it. 
00:36:41.591 [2024-12-16 02:58:11.923348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.591 [2024-12-16 02:58:11.923378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.591 qpair failed and we were unable to recover it. 00:36:41.591 [2024-12-16 02:58:11.923614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.591 [2024-12-16 02:58:11.923645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.591 qpair failed and we were unable to recover it. 00:36:41.591 [2024-12-16 02:58:11.923945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.591 [2024-12-16 02:58:11.923977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.591 qpair failed and we were unable to recover it. 00:36:41.591 [2024-12-16 02:58:11.924218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.591 [2024-12-16 02:58:11.924248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.591 qpair failed and we were unable to recover it. 00:36:41.591 [2024-12-16 02:58:11.924436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.591 [2024-12-16 02:58:11.924466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.591 qpair failed and we were unable to recover it. 
00:36:41.591 [2024-12-16 02:58:11.924641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.591 [2024-12-16 02:58:11.924672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.591 qpair failed and we were unable to recover it. 00:36:41.591 [2024-12-16 02:58:11.924844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.591 [2024-12-16 02:58:11.924881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.591 qpair failed and we were unable to recover it. 00:36:41.591 [2024-12-16 02:58:11.925062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.591 [2024-12-16 02:58:11.925095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.591 qpair failed and we were unable to recover it. 00:36:41.591 [2024-12-16 02:58:11.925288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.591 [2024-12-16 02:58:11.925318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.591 qpair failed and we were unable to recover it. 00:36:41.591 [2024-12-16 02:58:11.925505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.591 [2024-12-16 02:58:11.925536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.591 qpair failed and we were unable to recover it. 
00:36:41.591 [2024-12-16 02:58:11.925723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.591 [2024-12-16 02:58:11.925752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.591 qpair failed and we were unable to recover it. 00:36:41.591 [2024-12-16 02:58:11.925930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.591 [2024-12-16 02:58:11.925962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.591 qpair failed and we were unable to recover it. 00:36:41.591 [2024-12-16 02:58:11.926094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.591 [2024-12-16 02:58:11.926126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.591 qpair failed and we were unable to recover it. 00:36:41.591 [2024-12-16 02:58:11.926305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.591 [2024-12-16 02:58:11.926336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.591 qpair failed and we were unable to recover it. 00:36:41.592 [2024-12-16 02:58:11.926597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.926627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 
00:36:41.592 [2024-12-16 02:58:11.926731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.926761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 00:36:41.592 [2024-12-16 02:58:11.926932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.926965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 00:36:41.592 [2024-12-16 02:58:11.927135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.927166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 00:36:41.592 [2024-12-16 02:58:11.927346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.927377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 00:36:41.592 [2024-12-16 02:58:11.927609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.927639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 
00:36:41.592 [2024-12-16 02:58:11.927876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.927914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 00:36:41.592 [2024-12-16 02:58:11.928152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.928183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 00:36:41.592 [2024-12-16 02:58:11.928299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.928329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 00:36:41.592 [2024-12-16 02:58:11.928607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.928637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 00:36:41.592 [2024-12-16 02:58:11.928830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.928871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 
00:36:41.592 [2024-12-16 02:58:11.929064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.929094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 00:36:41.592 [2024-12-16 02:58:11.929261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.929292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 00:36:41.592 [2024-12-16 02:58:11.929479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.929510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 00:36:41.592 [2024-12-16 02:58:11.929681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.929712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 00:36:41.592 [2024-12-16 02:58:11.929833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.929871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 
00:36:41.592 [2024-12-16 02:58:11.930049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.930079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 00:36:41.592 [2024-12-16 02:58:11.930246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.930276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 00:36:41.592 [2024-12-16 02:58:11.930395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.930426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 00:36:41.592 [2024-12-16 02:58:11.930542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.930573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 00:36:41.592 [2024-12-16 02:58:11.930751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.930782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 
00:36:41.592 [2024-12-16 02:58:11.930994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.931027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 00:36:41.592 [2024-12-16 02:58:11.931136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.931167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 00:36:41.592 [2024-12-16 02:58:11.931269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.931300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 00:36:41.592 [2024-12-16 02:58:11.931428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.931458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 00:36:41.592 [2024-12-16 02:58:11.931641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.931672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 
00:36:41.592 [2024-12-16 02:58:11.931794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.931824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 00:36:41.592 [2024-12-16 02:58:11.932029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.932060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 00:36:41.592 [2024-12-16 02:58:11.932239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.932270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 00:36:41.592 [2024-12-16 02:58:11.932513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.932543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 00:36:41.592 [2024-12-16 02:58:11.932713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.932744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 
00:36:41.592 [2024-12-16 02:58:11.932990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.933023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 00:36:41.592 [2024-12-16 02:58:11.933211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.933242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 00:36:41.592 [2024-12-16 02:58:11.933417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.933449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 00:36:41.592 [2024-12-16 02:58:11.933633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.933664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 00:36:41.592 [2024-12-16 02:58:11.933845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.933883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 
00:36:41.592 [2024-12-16 02:58:11.934069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.934100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 00:36:41.592 [2024-12-16 02:58:11.934271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.934302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 00:36:41.592 [2024-12-16 02:58:11.934548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.592 [2024-12-16 02:58:11.934579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.592 qpair failed and we were unable to recover it. 00:36:41.593 [2024-12-16 02:58:11.934763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.593 [2024-12-16 02:58:11.934793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.593 qpair failed and we were unable to recover it. 00:36:41.593 [2024-12-16 02:58:11.934997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.593 [2024-12-16 02:58:11.935030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.593 qpair failed and we were unable to recover it. 
00:36:41.593 [2024-12-16 02:58:11.935266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.593 [2024-12-16 02:58:11.935296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.593 qpair failed and we were unable to recover it. 00:36:41.593 [2024-12-16 02:58:11.935579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.593 [2024-12-16 02:58:11.935610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.593 qpair failed and we were unable to recover it. 00:36:41.593 [2024-12-16 02:58:11.935783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.593 [2024-12-16 02:58:11.935814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.593 qpair failed and we were unable to recover it. 00:36:41.593 [2024-12-16 02:58:11.936044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.593 [2024-12-16 02:58:11.936077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.593 qpair failed and we were unable to recover it. 00:36:41.593 [2024-12-16 02:58:11.936259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.593 [2024-12-16 02:58:11.936290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.593 qpair failed and we were unable to recover it. 
00:36:41.593 [2024-12-16 02:58:11.936461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.593 [2024-12-16 02:58:11.936497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.593 qpair failed and we were unable to recover it. 00:36:41.593 [2024-12-16 02:58:11.936668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.593 [2024-12-16 02:58:11.936700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.593 qpair failed and we were unable to recover it. 00:36:41.593 [2024-12-16 02:58:11.936871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.593 [2024-12-16 02:58:11.936903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.593 qpair failed and we were unable to recover it. 00:36:41.593 [2024-12-16 02:58:11.937072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.593 [2024-12-16 02:58:11.937103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.593 qpair failed and we were unable to recover it. 00:36:41.593 [2024-12-16 02:58:11.937320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.593 [2024-12-16 02:58:11.937351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.593 qpair failed and we were unable to recover it. 
00:36:41.593 [2024-12-16 02:58:11.937557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.593 [2024-12-16 02:58:11.937587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.593 qpair failed and we were unable to recover it. 00:36:41.593 [2024-12-16 02:58:11.937784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.593 [2024-12-16 02:58:11.937815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.593 qpair failed and we were unable to recover it. 00:36:41.593 [2024-12-16 02:58:11.938003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.593 [2024-12-16 02:58:11.938034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.593 qpair failed and we were unable to recover it. 00:36:41.593 [2024-12-16 02:58:11.938225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.593 [2024-12-16 02:58:11.938256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.593 qpair failed and we were unable to recover it. 00:36:41.593 [2024-12-16 02:58:11.938376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.593 [2024-12-16 02:58:11.938406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.593 qpair failed and we were unable to recover it. 
00:36:41.593 [2024-12-16 02:58:11.938585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.593 [2024-12-16 02:58:11.938615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.593 qpair failed and we were unable to recover it. 00:36:41.593 [2024-12-16 02:58:11.938746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.593 [2024-12-16 02:58:11.938777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.593 qpair failed and we were unable to recover it. 00:36:41.593 [2024-12-16 02:58:11.938883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.593 [2024-12-16 02:58:11.938916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.593 qpair failed and we were unable to recover it. 00:36:41.593 [2024-12-16 02:58:11.939035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.593 [2024-12-16 02:58:11.939065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.593 qpair failed and we were unable to recover it. 00:36:41.593 [2024-12-16 02:58:11.939190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.593 [2024-12-16 02:58:11.939222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.593 qpair failed and we were unable to recover it. 
00:36:41.593 [2024-12-16 02:58:11.939341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.593 [2024-12-16 02:58:11.939371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.593 qpair failed and we were unable to recover it.
[The posix_sock_create/nvme_tcp_qpair_connect_sock error pair above repeats identically for every connect retry from 02:58:11.939552 through 02:58:11.962938: each attempt to addr=10.0.0.2, port=4420 on tqpair=0x7f2744000b90 fails with errno = 111, and each retry ends with "qpair failed and we were unable to recover it."]
00:36:41.596 [2024-12-16 02:58:11.963130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.596 [2024-12-16 02:58:11.963160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.596 qpair failed and we were unable to recover it. 00:36:41.596 [2024-12-16 02:58:11.963265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.596 [2024-12-16 02:58:11.963296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.596 qpair failed and we were unable to recover it. 00:36:41.596 [2024-12-16 02:58:11.963449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.596 [2024-12-16 02:58:11.963479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.596 qpair failed and we were unable to recover it. 00:36:41.596 [2024-12-16 02:58:11.963649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.596 [2024-12-16 02:58:11.963680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.596 qpair failed and we were unable to recover it. 00:36:41.596 [2024-12-16 02:58:11.963952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.596 [2024-12-16 02:58:11.963985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.596 qpair failed and we were unable to recover it. 
00:36:41.596 [2024-12-16 02:58:11.964098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.596 [2024-12-16 02:58:11.964129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.596 qpair failed and we were unable to recover it. 00:36:41.596 [2024-12-16 02:58:11.964253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.596 [2024-12-16 02:58:11.964285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.596 qpair failed and we were unable to recover it. 00:36:41.596 [2024-12-16 02:58:11.964519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.596 [2024-12-16 02:58:11.964549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.596 qpair failed and we were unable to recover it. 00:36:41.596 [2024-12-16 02:58:11.964732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.596 [2024-12-16 02:58:11.964763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.596 qpair failed and we were unable to recover it. 00:36:41.596 [2024-12-16 02:58:11.964941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.964975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 
00:36:41.597 [2024-12-16 02:58:11.965081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.965112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 00:36:41.597 [2024-12-16 02:58:11.965352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.965383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 00:36:41.597 [2024-12-16 02:58:11.965488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.965520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 00:36:41.597 [2024-12-16 02:58:11.965704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.965735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 00:36:41.597 [2024-12-16 02:58:11.965838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.965877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 
00:36:41.597 [2024-12-16 02:58:11.966157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.966190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 00:36:41.597 [2024-12-16 02:58:11.966299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.966330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 00:36:41.597 [2024-12-16 02:58:11.966446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.966478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 00:36:41.597 [2024-12-16 02:58:11.966649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.966680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 00:36:41.597 [2024-12-16 02:58:11.966796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.966827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 
00:36:41.597 [2024-12-16 02:58:11.967015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.967047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 00:36:41.597 [2024-12-16 02:58:11.967179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.967210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 00:36:41.597 [2024-12-16 02:58:11.967395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.967426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 00:36:41.597 [2024-12-16 02:58:11.967635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.967666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 00:36:41.597 [2024-12-16 02:58:11.967844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.967887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 
00:36:41.597 [2024-12-16 02:58:11.968033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.968065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 00:36:41.597 [2024-12-16 02:58:11.968179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.968209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 00:36:41.597 [2024-12-16 02:58:11.968321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.968352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 00:36:41.597 [2024-12-16 02:58:11.968518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.968548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 00:36:41.597 [2024-12-16 02:58:11.968722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.968753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 
00:36:41.597 [2024-12-16 02:58:11.968875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.968913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 00:36:41.597 [2024-12-16 02:58:11.969062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.969093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 00:36:41.597 [2024-12-16 02:58:11.969212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.969243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 00:36:41.597 [2024-12-16 02:58:11.969382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.969412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 00:36:41.597 [2024-12-16 02:58:11.969550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.969582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 
00:36:41.597 [2024-12-16 02:58:11.969704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.969735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 00:36:41.597 [2024-12-16 02:58:11.969841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.969879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 00:36:41.597 [2024-12-16 02:58:11.970060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.970091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 00:36:41.597 [2024-12-16 02:58:11.970212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.970243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 00:36:41.597 [2024-12-16 02:58:11.970420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.970451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 
00:36:41.597 [2024-12-16 02:58:11.970557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.970588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 00:36:41.597 [2024-12-16 02:58:11.970693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.970724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 00:36:41.597 [2024-12-16 02:58:11.970843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.970912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 00:36:41.597 [2024-12-16 02:58:11.971125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.971157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 00:36:41.597 [2024-12-16 02:58:11.971276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.971308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 
00:36:41.597 [2024-12-16 02:58:11.971416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.971447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 00:36:41.597 [2024-12-16 02:58:11.971554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.597 [2024-12-16 02:58:11.971584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.597 qpair failed and we were unable to recover it. 00:36:41.597 [2024-12-16 02:58:11.971704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.598 [2024-12-16 02:58:11.971734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.598 qpair failed and we were unable to recover it. 00:36:41.598 [2024-12-16 02:58:11.971903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.598 [2024-12-16 02:58:11.971935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.598 qpair failed and we were unable to recover it. 00:36:41.598 [2024-12-16 02:58:11.972175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.598 [2024-12-16 02:58:11.972206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.598 qpair failed and we were unable to recover it. 
00:36:41.598 [2024-12-16 02:58:11.972337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.598 [2024-12-16 02:58:11.972367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.598 qpair failed and we were unable to recover it. 00:36:41.598 [2024-12-16 02:58:11.972471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.598 [2024-12-16 02:58:11.972503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.598 qpair failed and we were unable to recover it. 00:36:41.598 [2024-12-16 02:58:11.972684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.598 [2024-12-16 02:58:11.972715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.598 qpair failed and we were unable to recover it. 00:36:41.598 [2024-12-16 02:58:11.972839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.598 [2024-12-16 02:58:11.972883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.598 qpair failed and we were unable to recover it. 00:36:41.598 [2024-12-16 02:58:11.972993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.598 [2024-12-16 02:58:11.973024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.598 qpair failed and we were unable to recover it. 
00:36:41.598 [2024-12-16 02:58:11.973194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.598 [2024-12-16 02:58:11.973225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.598 qpair failed and we were unable to recover it. 00:36:41.598 [2024-12-16 02:58:11.973415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.598 [2024-12-16 02:58:11.973446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.598 qpair failed and we were unable to recover it. 00:36:41.598 [2024-12-16 02:58:11.973678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.598 [2024-12-16 02:58:11.973747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.598 qpair failed and we were unable to recover it. 00:36:41.598 [2024-12-16 02:58:11.973898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.598 [2024-12-16 02:58:11.973936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.598 qpair failed and we were unable to recover it. 00:36:41.598 [2024-12-16 02:58:11.974049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.598 [2024-12-16 02:58:11.974084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.598 qpair failed and we were unable to recover it. 
00:36:41.598 [2024-12-16 02:58:11.974216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.598 [2024-12-16 02:58:11.974249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.598 qpair failed and we were unable to recover it. 00:36:41.598 [2024-12-16 02:58:11.974364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.598 [2024-12-16 02:58:11.974395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.598 qpair failed and we were unable to recover it. 00:36:41.598 [2024-12-16 02:58:11.974569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.598 [2024-12-16 02:58:11.974601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.598 qpair failed and we were unable to recover it. 00:36:41.598 [2024-12-16 02:58:11.974792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.598 [2024-12-16 02:58:11.974823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.598 qpair failed and we were unable to recover it. 00:36:41.598 [2024-12-16 02:58:11.975013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.598 [2024-12-16 02:58:11.975044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.598 qpair failed and we were unable to recover it. 
00:36:41.598 [2024-12-16 02:58:11.975159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.598 [2024-12-16 02:58:11.975188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.598 qpair failed and we were unable to recover it. 00:36:41.598 [2024-12-16 02:58:11.975352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.598 [2024-12-16 02:58:11.975383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.598 qpair failed and we were unable to recover it. 00:36:41.598 [2024-12-16 02:58:11.975499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.598 [2024-12-16 02:58:11.975529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.598 qpair failed and we were unable to recover it. 00:36:41.598 [2024-12-16 02:58:11.975642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.598 [2024-12-16 02:58:11.975674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.598 qpair failed and we were unable to recover it. 00:36:41.598 [2024-12-16 02:58:11.975779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.598 [2024-12-16 02:58:11.975808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.598 qpair failed and we were unable to recover it. 
00:36:41.598 [2024-12-16 02:58:11.976063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.598 [2024-12-16 02:58:11.976096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.598 qpair failed and we were unable to recover it. 00:36:41.598 [2024-12-16 02:58:11.976214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.598 [2024-12-16 02:58:11.976246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.598 qpair failed and we were unable to recover it. 00:36:41.598 [2024-12-16 02:58:11.976353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.598 [2024-12-16 02:58:11.976384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.598 qpair failed and we were unable to recover it. 00:36:41.598 [2024-12-16 02:58:11.976516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.598 [2024-12-16 02:58:11.976546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.598 qpair failed and we were unable to recover it. 00:36:41.598 [2024-12-16 02:58:11.976679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.598 [2024-12-16 02:58:11.976712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.598 qpair failed and we were unable to recover it. 
00:36:41.598 [2024-12-16 02:58:11.976972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.598 [2024-12-16 02:58:11.977007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:41.598 qpair failed and we were unable to recover it.
00:36:41.600 [2024-12-16 02:58:11.986393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.600 [2024-12-16 02:58:11.986431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:41.600 qpair failed and we were unable to recover it.
00:36:41.601 [2024-12-16 02:58:11.998505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.601 [2024-12-16 02:58:11.998536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.601 qpair failed and we were unable to recover it. 00:36:41.601 [2024-12-16 02:58:11.998667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.601 [2024-12-16 02:58:11.998699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.601 qpair failed and we were unable to recover it. 00:36:41.601 [2024-12-16 02:58:11.998816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.601 [2024-12-16 02:58:11.998859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.601 qpair failed and we were unable to recover it. 00:36:41.601 [2024-12-16 02:58:11.998976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.601 [2024-12-16 02:58:11.999009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.601 qpair failed and we were unable to recover it. 00:36:41.601 [2024-12-16 02:58:11.999177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.601 [2024-12-16 02:58:11.999209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.601 qpair failed and we were unable to recover it. 
00:36:41.601 [2024-12-16 02:58:11.999336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.601 [2024-12-16 02:58:11.999368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.601 qpair failed and we were unable to recover it. 00:36:41.601 [2024-12-16 02:58:11.999499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.601 [2024-12-16 02:58:11.999533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.601 qpair failed and we were unable to recover it. 00:36:41.601 [2024-12-16 02:58:11.999638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.601 [2024-12-16 02:58:11.999667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.601 qpair failed and we were unable to recover it. 00:36:41.601 [2024-12-16 02:58:11.999798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.601 [2024-12-16 02:58:11.999830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.601 qpair failed and we were unable to recover it. 00:36:41.601 [2024-12-16 02:58:12.000033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.601 [2024-12-16 02:58:12.000068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.601 qpair failed and we were unable to recover it. 
00:36:41.601 [2024-12-16 02:58:12.000192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.000225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 00:36:41.602 [2024-12-16 02:58:12.000406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.000439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 00:36:41.602 [2024-12-16 02:58:12.000547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.000579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 00:36:41.602 [2024-12-16 02:58:12.000691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.000723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 00:36:41.602 [2024-12-16 02:58:12.000932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.000966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 
00:36:41.602 [2024-12-16 02:58:12.001077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.001109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 00:36:41.602 [2024-12-16 02:58:12.001228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.001267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 00:36:41.602 [2024-12-16 02:58:12.001371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.001403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 00:36:41.602 [2024-12-16 02:58:12.001518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.001551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 00:36:41.602 [2024-12-16 02:58:12.001723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.001755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 
00:36:41.602 [2024-12-16 02:58:12.001873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.001906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 00:36:41.602 [2024-12-16 02:58:12.002091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.002125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 00:36:41.602 [2024-12-16 02:58:12.002236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.002269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 00:36:41.602 [2024-12-16 02:58:12.002461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.002494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 00:36:41.602 [2024-12-16 02:58:12.002620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.002653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 
00:36:41.602 [2024-12-16 02:58:12.002825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.002865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 00:36:41.602 [2024-12-16 02:58:12.003045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.003078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 00:36:41.602 [2024-12-16 02:58:12.003217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.003250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 00:36:41.602 [2024-12-16 02:58:12.003373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.003405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 00:36:41.602 [2024-12-16 02:58:12.003511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.003544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 
00:36:41.602 [2024-12-16 02:58:12.003666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.003699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 00:36:41.602 [2024-12-16 02:58:12.003817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.003857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 00:36:41.602 [2024-12-16 02:58:12.003975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.004008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 00:36:41.602 [2024-12-16 02:58:12.004107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.004137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 00:36:41.602 [2024-12-16 02:58:12.004319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.004352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 
00:36:41.602 [2024-12-16 02:58:12.004484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.004517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 00:36:41.602 [2024-12-16 02:58:12.004630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.004661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 00:36:41.602 [2024-12-16 02:58:12.004780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.004813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 00:36:41.602 [2024-12-16 02:58:12.004990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.005024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 00:36:41.602 [2024-12-16 02:58:12.005207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.005240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 
00:36:41.602 [2024-12-16 02:58:12.005414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.005446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 00:36:41.602 [2024-12-16 02:58:12.005566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.005597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 00:36:41.602 [2024-12-16 02:58:12.005778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.005809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 00:36:41.602 [2024-12-16 02:58:12.005995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.006035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 00:36:41.602 [2024-12-16 02:58:12.006145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.602 [2024-12-16 02:58:12.006178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.602 qpair failed and we were unable to recover it. 
00:36:41.602 [2024-12-16 02:58:12.006283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.006313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 00:36:41.603 [2024-12-16 02:58:12.006449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.006483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 00:36:41.603 [2024-12-16 02:58:12.006723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.006756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 00:36:41.603 [2024-12-16 02:58:12.006892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.006927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 00:36:41.603 [2024-12-16 02:58:12.007037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.007070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 
00:36:41.603 [2024-12-16 02:58:12.007183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.007215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 00:36:41.603 [2024-12-16 02:58:12.007391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.007423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 00:36:41.603 [2024-12-16 02:58:12.007546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.007578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 00:36:41.603 [2024-12-16 02:58:12.007818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.007875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 00:36:41.603 [2024-12-16 02:58:12.007986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.008019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 
00:36:41.603 [2024-12-16 02:58:12.008148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.008181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 00:36:41.603 [2024-12-16 02:58:12.008369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.008402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 00:36:41.603 [2024-12-16 02:58:12.008522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.008555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 00:36:41.603 [2024-12-16 02:58:12.008762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.008795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 00:36:41.603 [2024-12-16 02:58:12.008995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.009029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 
00:36:41.603 [2024-12-16 02:58:12.009272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.009305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 00:36:41.603 [2024-12-16 02:58:12.009438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.009471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 00:36:41.603 [2024-12-16 02:58:12.009652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.009686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 00:36:41.603 [2024-12-16 02:58:12.009800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.009832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 00:36:41.603 [2024-12-16 02:58:12.010012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.010047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 
00:36:41.603 [2024-12-16 02:58:12.010174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.010206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 00:36:41.603 [2024-12-16 02:58:12.010329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.010361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 00:36:41.603 [2024-12-16 02:58:12.010473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.010505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 00:36:41.603 [2024-12-16 02:58:12.010615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.010647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 00:36:41.603 [2024-12-16 02:58:12.010751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.010784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 
00:36:41.603 [2024-12-16 02:58:12.010897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.010931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 00:36:41.603 [2024-12-16 02:58:12.011118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.011153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 00:36:41.603 [2024-12-16 02:58:12.011277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.011309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 00:36:41.603 [2024-12-16 02:58:12.011488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.011521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 00:36:41.603 [2024-12-16 02:58:12.011634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.011667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 
00:36:41.603 [2024-12-16 02:58:12.011779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.011811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 00:36:41.603 [2024-12-16 02:58:12.011944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.011979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 00:36:41.603 [2024-12-16 02:58:12.012099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.012132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 00:36:41.603 [2024-12-16 02:58:12.012299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.012331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 00:36:41.603 [2024-12-16 02:58:12.012439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.603 [2024-12-16 02:58:12.012473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.603 qpair failed and we were unable to recover it. 
00:36:41.606 [2024-12-16 02:58:12.037350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.606 [2024-12-16 02:58:12.037383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.606 qpair failed and we were unable to recover it. 00:36:41.606 [2024-12-16 02:58:12.037585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.606 [2024-12-16 02:58:12.037619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.606 qpair failed and we were unable to recover it. 00:36:41.606 [2024-12-16 02:58:12.037828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.606 [2024-12-16 02:58:12.037881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.606 qpair failed and we were unable to recover it. 00:36:41.606 [2024-12-16 02:58:12.038127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.606 [2024-12-16 02:58:12.038162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.606 qpair failed and we were unable to recover it. 00:36:41.606 [2024-12-16 02:58:12.038342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.606 [2024-12-16 02:58:12.038375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.606 qpair failed and we were unable to recover it. 
00:36:41.606 [2024-12-16 02:58:12.038547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.606 [2024-12-16 02:58:12.038579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.606 qpair failed and we were unable to recover it. 00:36:41.606 [2024-12-16 02:58:12.038688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.606 [2024-12-16 02:58:12.038719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.606 qpair failed and we were unable to recover it. 00:36:41.606 [2024-12-16 02:58:12.038944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.606 [2024-12-16 02:58:12.038979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.606 qpair failed and we were unable to recover it. 00:36:41.606 [2024-12-16 02:58:12.039183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.606 [2024-12-16 02:58:12.039216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.606 qpair failed and we were unable to recover it. 00:36:41.606 [2024-12-16 02:58:12.039401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.606 [2024-12-16 02:58:12.039434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.606 qpair failed and we were unable to recover it. 
00:36:41.606 [2024-12-16 02:58:12.039559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.606 [2024-12-16 02:58:12.039595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.606 qpair failed and we were unable to recover it. 00:36:41.606 [2024-12-16 02:58:12.039841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.606 [2024-12-16 02:58:12.039883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.606 qpair failed and we were unable to recover it. 00:36:41.606 [2024-12-16 02:58:12.040060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.606 [2024-12-16 02:58:12.040094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.606 qpair failed and we were unable to recover it. 00:36:41.606 [2024-12-16 02:58:12.040321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.040354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 00:36:41.607 [2024-12-16 02:58:12.040563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.040596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 
00:36:41.607 [2024-12-16 02:58:12.040736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.040768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 00:36:41.607 [2024-12-16 02:58:12.040899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.040933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 00:36:41.607 [2024-12-16 02:58:12.041117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.041150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 00:36:41.607 [2024-12-16 02:58:12.041268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.041302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 00:36:41.607 [2024-12-16 02:58:12.041523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.041557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 
00:36:41.607 [2024-12-16 02:58:12.041725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.041758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 00:36:41.607 [2024-12-16 02:58:12.041931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.041965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 00:36:41.607 [2024-12-16 02:58:12.042180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.042213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 00:36:41.607 [2024-12-16 02:58:12.042486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.042520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 00:36:41.607 [2024-12-16 02:58:12.042692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.042726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 
00:36:41.607 [2024-12-16 02:58:12.042909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.042945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 00:36:41.607 [2024-12-16 02:58:12.043073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.043106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 00:36:41.607 [2024-12-16 02:58:12.043295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.043328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 00:36:41.607 [2024-12-16 02:58:12.043609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.043642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 00:36:41.607 [2024-12-16 02:58:12.043902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.043937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 
00:36:41.607 [2024-12-16 02:58:12.044080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.044119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 00:36:41.607 [2024-12-16 02:58:12.044255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.044288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 00:36:41.607 [2024-12-16 02:58:12.044456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.044490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 00:36:41.607 [2024-12-16 02:58:12.044667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.044699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 00:36:41.607 [2024-12-16 02:58:12.044891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.044927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 
00:36:41.607 [2024-12-16 02:58:12.045111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.045144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 00:36:41.607 [2024-12-16 02:58:12.045325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.045358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 00:36:41.607 [2024-12-16 02:58:12.045548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.045581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 00:36:41.607 [2024-12-16 02:58:12.045886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.045921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 00:36:41.607 [2024-12-16 02:58:12.046111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.046144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 
00:36:41.607 [2024-12-16 02:58:12.046269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.046303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 00:36:41.607 [2024-12-16 02:58:12.046512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.046544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 00:36:41.607 [2024-12-16 02:58:12.046724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.046757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 00:36:41.607 [2024-12-16 02:58:12.047015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.047050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 00:36:41.607 [2024-12-16 02:58:12.047248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.047282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 
00:36:41.607 [2024-12-16 02:58:12.047427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.047460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 00:36:41.607 [2024-12-16 02:58:12.047744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.047776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 00:36:41.607 [2024-12-16 02:58:12.047970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.607 [2024-12-16 02:58:12.048004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.607 qpair failed and we were unable to recover it. 00:36:41.607 [2024-12-16 02:58:12.048208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.048241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 00:36:41.608 [2024-12-16 02:58:12.048374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.048406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 
00:36:41.608 [2024-12-16 02:58:12.048601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.048635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 00:36:41.608 [2024-12-16 02:58:12.048760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.048793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 00:36:41.608 [2024-12-16 02:58:12.048995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.049030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 00:36:41.608 [2024-12-16 02:58:12.049257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.049291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 00:36:41.608 [2024-12-16 02:58:12.049529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.049562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 
00:36:41.608 [2024-12-16 02:58:12.049757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.049791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 00:36:41.608 [2024-12-16 02:58:12.050049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.050083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 00:36:41.608 [2024-12-16 02:58:12.050266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.050299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 00:36:41.608 [2024-12-16 02:58:12.050440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.050473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 00:36:41.608 [2024-12-16 02:58:12.050673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.050706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 
00:36:41.608 [2024-12-16 02:58:12.051002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.051037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 00:36:41.608 [2024-12-16 02:58:12.051168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.051200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 00:36:41.608 [2024-12-16 02:58:12.051440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.051473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 00:36:41.608 [2024-12-16 02:58:12.051709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.051741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 00:36:41.608 [2024-12-16 02:58:12.052010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.052045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 
00:36:41.608 [2024-12-16 02:58:12.052173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.052206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 00:36:41.608 [2024-12-16 02:58:12.052342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.052377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 00:36:41.608 [2024-12-16 02:58:12.052611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.052643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 00:36:41.608 [2024-12-16 02:58:12.052903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.052955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 00:36:41.608 [2024-12-16 02:58:12.053251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.053285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 
00:36:41.608 [2024-12-16 02:58:12.053477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.053510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 00:36:41.608 [2024-12-16 02:58:12.053718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.053751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 00:36:41.608 [2024-12-16 02:58:12.053890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.053924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 00:36:41.608 [2024-12-16 02:58:12.054058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.054092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 00:36:41.608 [2024-12-16 02:58:12.054283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.054316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 
00:36:41.608 [2024-12-16 02:58:12.054557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.054590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 00:36:41.608 [2024-12-16 02:58:12.054760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.054792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 00:36:41.608 [2024-12-16 02:58:12.054999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.055035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 00:36:41.608 [2024-12-16 02:58:12.055207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.055240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 00:36:41.608 [2024-12-16 02:58:12.055424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.608 [2024-12-16 02:58:12.055457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.608 qpair failed and we were unable to recover it. 
00:36:41.611 [2024-12-16 02:58:12.081641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.611 [2024-12-16 02:58:12.081674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.611 qpair failed and we were unable to recover it. 00:36:41.611 [2024-12-16 02:58:12.081873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.611 [2024-12-16 02:58:12.081907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.611 qpair failed and we were unable to recover it. 00:36:41.611 [2024-12-16 02:58:12.082034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.611 [2024-12-16 02:58:12.082067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.611 qpair failed and we were unable to recover it. 00:36:41.611 [2024-12-16 02:58:12.082207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.611 [2024-12-16 02:58:12.082240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.611 qpair failed and we were unable to recover it. 00:36:41.611 [2024-12-16 02:58:12.082500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.611 [2024-12-16 02:58:12.082534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.611 qpair failed and we were unable to recover it. 
00:36:41.611 [2024-12-16 02:58:12.082715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.611 [2024-12-16 02:58:12.082749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.611 qpair failed and we were unable to recover it. 00:36:41.611 [2024-12-16 02:58:12.083007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.611 [2024-12-16 02:58:12.083043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.611 qpair failed and we were unable to recover it. 00:36:41.611 [2024-12-16 02:58:12.083227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.611 [2024-12-16 02:58:12.083260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.611 qpair failed and we were unable to recover it. 00:36:41.611 [2024-12-16 02:58:12.083502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.611 [2024-12-16 02:58:12.083535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.611 qpair failed and we were unable to recover it. 00:36:41.611 [2024-12-16 02:58:12.083658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.611 [2024-12-16 02:58:12.083691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.611 qpair failed and we were unable to recover it. 
00:36:41.611 [2024-12-16 02:58:12.083800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.611 [2024-12-16 02:58:12.083833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.611 qpair failed and we were unable to recover it. 00:36:41.611 [2024-12-16 02:58:12.084127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.084162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 00:36:41.612 [2024-12-16 02:58:12.084441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.084473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 00:36:41.612 [2024-12-16 02:58:12.084677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.084711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 00:36:41.612 [2024-12-16 02:58:12.084978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.085012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 
00:36:41.612 [2024-12-16 02:58:12.085213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.085247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 00:36:41.612 [2024-12-16 02:58:12.085392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.085425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 00:36:41.612 [2024-12-16 02:58:12.085666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.085704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 00:36:41.612 [2024-12-16 02:58:12.085888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.085923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 00:36:41.612 [2024-12-16 02:58:12.086051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.086082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 
00:36:41.612 [2024-12-16 02:58:12.086345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.086380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 00:36:41.612 [2024-12-16 02:58:12.086518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.086552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 00:36:41.612 [2024-12-16 02:58:12.086675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.086707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 00:36:41.612 [2024-12-16 02:58:12.086917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.086952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 00:36:41.612 [2024-12-16 02:58:12.087083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.087116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 
00:36:41.612 [2024-12-16 02:58:12.087347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.087380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 00:36:41.612 [2024-12-16 02:58:12.087588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.087621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 00:36:41.612 [2024-12-16 02:58:12.087804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.087837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 00:36:41.612 [2024-12-16 02:58:12.088029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.088062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 00:36:41.612 [2024-12-16 02:58:12.088291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.088324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 
00:36:41.612 [2024-12-16 02:58:12.088453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.088487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 00:36:41.612 [2024-12-16 02:58:12.088603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.088637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 00:36:41.612 [2024-12-16 02:58:12.088816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.088860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 00:36:41.612 [2024-12-16 02:58:12.088996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.089030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 00:36:41.612 [2024-12-16 02:58:12.089223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.089255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 
00:36:41.612 [2024-12-16 02:58:12.089379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.089412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 00:36:41.612 [2024-12-16 02:58:12.089652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.089686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 00:36:41.612 [2024-12-16 02:58:12.089972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.090006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 00:36:41.612 [2024-12-16 02:58:12.090280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.090312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 00:36:41.612 [2024-12-16 02:58:12.090610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.090643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 
00:36:41.612 [2024-12-16 02:58:12.090835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.090883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 00:36:41.612 [2024-12-16 02:58:12.091080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.091113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 00:36:41.612 [2024-12-16 02:58:12.091338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.091371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 00:36:41.612 [2024-12-16 02:58:12.091506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.091539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 00:36:41.612 [2024-12-16 02:58:12.091802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.091842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 
00:36:41.612 [2024-12-16 02:58:12.092033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.092067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 00:36:41.612 [2024-12-16 02:58:12.092275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.092310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 00:36:41.612 [2024-12-16 02:58:12.092440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.612 [2024-12-16 02:58:12.092473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.612 qpair failed and we were unable to recover it. 00:36:41.612 [2024-12-16 02:58:12.092739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.613 [2024-12-16 02:58:12.092772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.613 qpair failed and we were unable to recover it. 00:36:41.613 [2024-12-16 02:58:12.092953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.613 [2024-12-16 02:58:12.092988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.613 qpair failed and we were unable to recover it. 
00:36:41.613 [2024-12-16 02:58:12.093250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.613 [2024-12-16 02:58:12.093283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.613 qpair failed and we were unable to recover it. 00:36:41.613 [2024-12-16 02:58:12.093416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.613 [2024-12-16 02:58:12.093448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.613 qpair failed and we were unable to recover it. 00:36:41.613 [2024-12-16 02:58:12.093593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.613 [2024-12-16 02:58:12.093626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.613 qpair failed and we were unable to recover it. 00:36:41.613 [2024-12-16 02:58:12.094069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.613 [2024-12-16 02:58:12.094108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.613 qpair failed and we were unable to recover it. 00:36:41.613 [2024-12-16 02:58:12.094302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.613 [2024-12-16 02:58:12.094336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.613 qpair failed and we were unable to recover it. 
00:36:41.613 [2024-12-16 02:58:12.094660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.613 [2024-12-16 02:58:12.094694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.613 qpair failed and we were unable to recover it. 00:36:41.613 [2024-12-16 02:58:12.094956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.613 [2024-12-16 02:58:12.094992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.613 qpair failed and we were unable to recover it. 00:36:41.613 [2024-12-16 02:58:12.095198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.613 [2024-12-16 02:58:12.095231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.613 qpair failed and we were unable to recover it. 00:36:41.613 [2024-12-16 02:58:12.095481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.613 [2024-12-16 02:58:12.095515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.613 qpair failed and we were unable to recover it. 00:36:41.613 [2024-12-16 02:58:12.095756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.613 [2024-12-16 02:58:12.095789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.613 qpair failed and we were unable to recover it. 
00:36:41.613 [2024-12-16 02:58:12.095978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.613 [2024-12-16 02:58:12.096014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.613 qpair failed and we were unable to recover it. 00:36:41.613 [2024-12-16 02:58:12.096145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.613 [2024-12-16 02:58:12.096178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.613 qpair failed and we were unable to recover it. 00:36:41.613 [2024-12-16 02:58:12.096365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.613 [2024-12-16 02:58:12.096398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.613 qpair failed and we were unable to recover it. 00:36:41.613 [2024-12-16 02:58:12.096594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.613 [2024-12-16 02:58:12.096627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.613 qpair failed and we were unable to recover it. 00:36:41.613 [2024-12-16 02:58:12.096878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.613 [2024-12-16 02:58:12.096914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.613 qpair failed and we were unable to recover it. 
00:36:41.613 [2024-12-16 02:58:12.097129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.613 [2024-12-16 02:58:12.097162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.613 qpair failed and we were unable to recover it. 00:36:41.613 [2024-12-16 02:58:12.097314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.613 [2024-12-16 02:58:12.097348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.613 qpair failed and we were unable to recover it. 00:36:41.613 [2024-12-16 02:58:12.097594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.613 [2024-12-16 02:58:12.097626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.613 qpair failed and we were unable to recover it. 00:36:41.613 [2024-12-16 02:58:12.097838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.613 [2024-12-16 02:58:12.097884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.613 qpair failed and we were unable to recover it. 00:36:41.613 [2024-12-16 02:58:12.098034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.613 [2024-12-16 02:58:12.098068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.613 qpair failed and we were unable to recover it. 
00:36:41.613 [2024-12-16 02:58:12.098256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.613 [2024-12-16 02:58:12.098290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.613 qpair failed and we were unable to recover it. 00:36:41.613 [2024-12-16 02:58:12.098482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.613 [2024-12-16 02:58:12.098515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.613 qpair failed and we were unable to recover it. 00:36:41.613 [2024-12-16 02:58:12.098766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.613 [2024-12-16 02:58:12.098800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.613 qpair failed and we were unable to recover it. 00:36:41.613 [2024-12-16 02:58:12.099019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.613 [2024-12-16 02:58:12.099054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.613 qpair failed and we were unable to recover it. 00:36:41.613 [2024-12-16 02:58:12.099168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.613 [2024-12-16 02:58:12.099202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.613 qpair failed and we were unable to recover it. 
00:36:41.613 [2024-12-16 02:58:12.099321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.613 [2024-12-16 02:58:12.099354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:41.613 qpair failed and we were unable to recover it.
[... same connect() failed (errno = 111) / qpair recovery failure sequence for tqpair=0xd9fcd0, addr=10.0.0.2, port=4420 repeated through 02:58:12.127338 ...]
00:36:41.617 [2024-12-16 02:58:12.127538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.617 [2024-12-16 02:58:12.127571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.617 qpair failed and we were unable to recover it. 00:36:41.617 [2024-12-16 02:58:12.127815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.617 [2024-12-16 02:58:12.127861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.617 qpair failed and we were unable to recover it. 00:36:41.617 [2024-12-16 02:58:12.127989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.617 [2024-12-16 02:58:12.128023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.617 qpair failed and we were unable to recover it. 00:36:41.617 [2024-12-16 02:58:12.128207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.617 [2024-12-16 02:58:12.128241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.617 qpair failed and we were unable to recover it. 00:36:41.617 [2024-12-16 02:58:12.128368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.617 [2024-12-16 02:58:12.128403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.617 qpair failed and we were unable to recover it. 
00:36:41.617 [2024-12-16 02:58:12.128697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.617 [2024-12-16 02:58:12.128731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.617 qpair failed and we were unable to recover it. 00:36:41.617 [2024-12-16 02:58:12.128918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.617 [2024-12-16 02:58:12.128955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.617 qpair failed and we were unable to recover it. 00:36:41.617 [2024-12-16 02:58:12.129149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.617 [2024-12-16 02:58:12.129183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.617 qpair failed and we were unable to recover it. 00:36:41.617 [2024-12-16 02:58:12.129438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.617 [2024-12-16 02:58:12.129472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.617 qpair failed and we were unable to recover it. 00:36:41.617 [2024-12-16 02:58:12.129761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.617 [2024-12-16 02:58:12.129795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.617 qpair failed and we were unable to recover it. 
00:36:41.617 [2024-12-16 02:58:12.129965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.617 [2024-12-16 02:58:12.130002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.617 qpair failed and we were unable to recover it. 00:36:41.617 [2024-12-16 02:58:12.130203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.617 [2024-12-16 02:58:12.130236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.617 qpair failed and we were unable to recover it. 00:36:41.617 [2024-12-16 02:58:12.130367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.617 [2024-12-16 02:58:12.130400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.617 qpair failed and we were unable to recover it. 00:36:41.617 [2024-12-16 02:58:12.130617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.617 [2024-12-16 02:58:12.130652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.617 qpair failed and we were unable to recover it. 00:36:41.617 [2024-12-16 02:58:12.130841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.617 [2024-12-16 02:58:12.130885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.617 qpair failed and we were unable to recover it. 
00:36:41.617 [2024-12-16 02:58:12.131072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.617 [2024-12-16 02:58:12.131105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.617 qpair failed and we were unable to recover it. 00:36:41.617 [2024-12-16 02:58:12.131290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.617 [2024-12-16 02:58:12.131323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.617 qpair failed and we were unable to recover it. 00:36:41.617 [2024-12-16 02:58:12.131570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.617 [2024-12-16 02:58:12.131602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.617 qpair failed and we were unable to recover it. 00:36:41.617 [2024-12-16 02:58:12.131916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.617 [2024-12-16 02:58:12.131952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.617 qpair failed and we were unable to recover it. 00:36:41.617 [2024-12-16 02:58:12.132206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.617 [2024-12-16 02:58:12.132240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.617 qpair failed and we were unable to recover it. 
00:36:41.617 [2024-12-16 02:58:12.132444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.617 [2024-12-16 02:58:12.132477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.617 qpair failed and we were unable to recover it. 00:36:41.617 [2024-12-16 02:58:12.132674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.617 [2024-12-16 02:58:12.132707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.617 qpair failed and we were unable to recover it. 00:36:41.617 [2024-12-16 02:58:12.132974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.617 [2024-12-16 02:58:12.133011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.617 qpair failed and we were unable to recover it. 00:36:41.617 [2024-12-16 02:58:12.133201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.617 [2024-12-16 02:58:12.133236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.617 qpair failed and we were unable to recover it. 00:36:41.617 [2024-12-16 02:58:12.133614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.617 [2024-12-16 02:58:12.133647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.617 qpair failed and we were unable to recover it. 
00:36:41.617 [2024-12-16 02:58:12.133928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.617 [2024-12-16 02:58:12.133965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.617 qpair failed and we were unable to recover it. 00:36:41.617 [2024-12-16 02:58:12.134107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.617 [2024-12-16 02:58:12.134141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.617 qpair failed and we were unable to recover it. 00:36:41.617 [2024-12-16 02:58:12.134342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.134374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 00:36:41.618 [2024-12-16 02:58:12.134592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.134626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 00:36:41.618 [2024-12-16 02:58:12.134921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.134957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 
00:36:41.618 [2024-12-16 02:58:12.135100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.135134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 00:36:41.618 [2024-12-16 02:58:12.135366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.135405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 00:36:41.618 [2024-12-16 02:58:12.135653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.135687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 00:36:41.618 [2024-12-16 02:58:12.135909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.135944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 00:36:41.618 [2024-12-16 02:58:12.136084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.136117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 
00:36:41.618 [2024-12-16 02:58:12.136262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.136295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 00:36:41.618 [2024-12-16 02:58:12.136501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.136534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 00:36:41.618 [2024-12-16 02:58:12.136764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.136797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 00:36:41.618 [2024-12-16 02:58:12.137089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.137123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 00:36:41.618 [2024-12-16 02:58:12.137314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.137349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 
00:36:41.618 [2024-12-16 02:58:12.137609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.137642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 00:36:41.618 [2024-12-16 02:58:12.137889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.137924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 00:36:41.618 [2024-12-16 02:58:12.138180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.138215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 00:36:41.618 [2024-12-16 02:58:12.138511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.138545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 00:36:41.618 [2024-12-16 02:58:12.138808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.138842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 
00:36:41.618 [2024-12-16 02:58:12.139110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.139145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 00:36:41.618 [2024-12-16 02:58:12.139357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.139392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 00:36:41.618 [2024-12-16 02:58:12.139588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.139621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 00:36:41.618 [2024-12-16 02:58:12.139885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.139920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 00:36:41.618 [2024-12-16 02:58:12.140133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.140167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 
00:36:41.618 [2024-12-16 02:58:12.140324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.140357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 00:36:41.618 [2024-12-16 02:58:12.140567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.140601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 00:36:41.618 [2024-12-16 02:58:12.140786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.140820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 00:36:41.618 [2024-12-16 02:58:12.141039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.141073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 00:36:41.618 [2024-12-16 02:58:12.141213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.141248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 
00:36:41.618 [2024-12-16 02:58:12.141372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.141406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 00:36:41.618 [2024-12-16 02:58:12.141687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.141721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 00:36:41.618 [2024-12-16 02:58:12.141860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.141896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 00:36:41.618 [2024-12-16 02:58:12.142088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.142122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 00:36:41.618 [2024-12-16 02:58:12.142379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.142413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 
00:36:41.618 [2024-12-16 02:58:12.142710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.142745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 00:36:41.618 [2024-12-16 02:58:12.143043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.143079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 00:36:41.618 [2024-12-16 02:58:12.143293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.618 [2024-12-16 02:58:12.143326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.618 qpair failed and we were unable to recover it. 00:36:41.619 [2024-12-16 02:58:12.143650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.619 [2024-12-16 02:58:12.143684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.619 qpair failed and we were unable to recover it. 00:36:41.619 [2024-12-16 02:58:12.143979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.619 [2024-12-16 02:58:12.144013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.619 qpair failed and we were unable to recover it. 
00:36:41.619 [2024-12-16 02:58:12.144225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.619 [2024-12-16 02:58:12.144259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.619 qpair failed and we were unable to recover it. 00:36:41.619 [2024-12-16 02:58:12.144399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.619 [2024-12-16 02:58:12.144432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.619 qpair failed and we were unable to recover it. 00:36:41.619 [2024-12-16 02:58:12.144653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.619 [2024-12-16 02:58:12.144687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.619 qpair failed and we were unable to recover it. 00:36:41.619 [2024-12-16 02:58:12.144858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.619 [2024-12-16 02:58:12.144893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.619 qpair failed and we were unable to recover it. 00:36:41.619 [2024-12-16 02:58:12.145142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.619 [2024-12-16 02:58:12.145176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.619 qpair failed and we were unable to recover it. 
00:36:41.619 [2024-12-16 02:58:12.145389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.619 [2024-12-16 02:58:12.145423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.619 qpair failed and we were unable to recover it. 00:36:41.619 [2024-12-16 02:58:12.145717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.619 [2024-12-16 02:58:12.145751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.619 qpair failed and we were unable to recover it. 00:36:41.619 [2024-12-16 02:58:12.145970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.619 [2024-12-16 02:58:12.146006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.619 qpair failed and we were unable to recover it. 00:36:41.619 [2024-12-16 02:58:12.146145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.619 [2024-12-16 02:58:12.146179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.619 qpair failed and we were unable to recover it. 00:36:41.619 [2024-12-16 02:58:12.146478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.619 [2024-12-16 02:58:12.146512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.619 qpair failed and we were unable to recover it. 
00:36:41.619 [2024-12-16 02:58:12.146771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.619 [2024-12-16 02:58:12.146804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:41.619 qpair failed and we were unable to recover it.
[... the same three-line record (connect() failed, errno = 111 -> sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it.) repeats for every connection attempt from 02:58:12.147 through 02:58:12.176 ...]
00:36:41.622 [2024-12-16 02:58:12.176817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.622 [2024-12-16 02:58:12.176863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:41.622 qpair failed and we were unable to recover it.
00:36:41.622 [2024-12-16 02:58:12.177069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.622 [2024-12-16 02:58:12.177103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.622 qpair failed and we were unable to recover it. 00:36:41.622 [2024-12-16 02:58:12.177299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.622 [2024-12-16 02:58:12.177332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.622 qpair failed and we were unable to recover it. 00:36:41.622 [2024-12-16 02:58:12.177462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.622 [2024-12-16 02:58:12.177497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.622 qpair failed and we were unable to recover it. 00:36:41.622 [2024-12-16 02:58:12.177694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.622 [2024-12-16 02:58:12.177728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.622 qpair failed and we were unable to recover it. 00:36:41.622 [2024-12-16 02:58:12.177957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.622 [2024-12-16 02:58:12.177994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.622 qpair failed and we were unable to recover it. 
00:36:41.622 [2024-12-16 02:58:12.178202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.622 [2024-12-16 02:58:12.178237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.622 qpair failed and we were unable to recover it. 00:36:41.622 [2024-12-16 02:58:12.178441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.622 [2024-12-16 02:58:12.178476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.622 qpair failed and we were unable to recover it. 00:36:41.622 [2024-12-16 02:58:12.178608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.622 [2024-12-16 02:58:12.178642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.622 qpair failed and we were unable to recover it. 00:36:41.622 [2024-12-16 02:58:12.178909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.622 [2024-12-16 02:58:12.178945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.622 qpair failed and we were unable to recover it. 00:36:41.622 [2024-12-16 02:58:12.179220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.622 [2024-12-16 02:58:12.179255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.622 qpair failed and we were unable to recover it. 
00:36:41.622 [2024-12-16 02:58:12.179462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.622 [2024-12-16 02:58:12.179497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.622 qpair failed and we were unable to recover it. 00:36:41.622 [2024-12-16 02:58:12.179776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.622 [2024-12-16 02:58:12.179811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.622 qpair failed and we were unable to recover it. 00:36:41.622 [2024-12-16 02:58:12.180026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.622 [2024-12-16 02:58:12.180062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.622 qpair failed and we were unable to recover it. 00:36:41.622 [2024-12-16 02:58:12.180289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.622 [2024-12-16 02:58:12.180324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.622 qpair failed and we were unable to recover it. 00:36:41.622 [2024-12-16 02:58:12.180437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.622 [2024-12-16 02:58:12.180472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.622 qpair failed and we were unable to recover it. 
00:36:41.622 [2024-12-16 02:58:12.180687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.180721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.180920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.180958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.181217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.181251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.181548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.181588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.181787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.181823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 
00:36:41.623 [2024-12-16 02:58:12.182047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.182083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.182237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.182272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.182424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.182458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.182739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.182774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.182987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.183024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 
00:36:41.623 [2024-12-16 02:58:12.183280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.183315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.183550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.183584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.183902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.183939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.184099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.184134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.184284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.184319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 
00:36:41.623 [2024-12-16 02:58:12.184605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.184639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.184903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.184939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.185263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.185297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.185588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.185623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.185900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.185936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 
00:36:41.623 [2024-12-16 02:58:12.186154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.186190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.186493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.186527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.186669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.186703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.186942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.186980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.187281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.187317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 
00:36:41.623 [2024-12-16 02:58:12.187524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.187559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.187706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.187742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.187944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.187980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.188118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.188153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.188353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.188387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 
00:36:41.623 [2024-12-16 02:58:12.188536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.188576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.188807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.188843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.189050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.189085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.189283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.189318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.189505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.189539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 
00:36:41.623 [2024-12-16 02:58:12.189755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.189789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.189999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.190036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.190231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.190265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.190460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.623 [2024-12-16 02:58:12.190495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.623 qpair failed and we were unable to recover it. 00:36:41.623 [2024-12-16 02:58:12.190674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.624 [2024-12-16 02:58:12.190709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.624 qpair failed and we were unable to recover it. 
00:36:41.624 [2024-12-16 02:58:12.190833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.624 [2024-12-16 02:58:12.190878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.624 qpair failed and we were unable to recover it. 00:36:41.624 [2024-12-16 02:58:12.191084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.624 [2024-12-16 02:58:12.191119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.624 qpair failed and we were unable to recover it. 00:36:41.624 [2024-12-16 02:58:12.191260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.624 [2024-12-16 02:58:12.191295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.624 qpair failed and we were unable to recover it. 00:36:41.624 [2024-12-16 02:58:12.191419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.624 [2024-12-16 02:58:12.191453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.624 qpair failed and we were unable to recover it. 00:36:41.624 [2024-12-16 02:58:12.191665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.624 [2024-12-16 02:58:12.191701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.624 qpair failed and we were unable to recover it. 
00:36:41.624 [2024-12-16 02:58:12.191887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.624 [2024-12-16 02:58:12.191924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.624 qpair failed and we were unable to recover it. 00:36:41.624 [2024-12-16 02:58:12.192059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.624 [2024-12-16 02:58:12.192093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.624 qpair failed and we were unable to recover it. 00:36:41.624 [2024-12-16 02:58:12.192293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.624 [2024-12-16 02:58:12.192327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.624 qpair failed and we were unable to recover it. 00:36:41.624 [2024-12-16 02:58:12.192437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.624 [2024-12-16 02:58:12.192471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.624 qpair failed and we were unable to recover it. 00:36:41.624 [2024-12-16 02:58:12.192616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.624 [2024-12-16 02:58:12.192651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.624 qpair failed and we were unable to recover it. 
00:36:41.624 [2024-12-16 02:58:12.192764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.624 [2024-12-16 02:58:12.192796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.624 qpair failed and we were unable to recover it. 00:36:41.624 [2024-12-16 02:58:12.193017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.624 [2024-12-16 02:58:12.193054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.624 qpair failed and we were unable to recover it. 00:36:41.624 [2024-12-16 02:58:12.193167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.624 [2024-12-16 02:58:12.193201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.624 qpair failed and we were unable to recover it. 00:36:41.624 [2024-12-16 02:58:12.193402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.624 [2024-12-16 02:58:12.193437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.624 qpair failed and we were unable to recover it. 00:36:41.624 [2024-12-16 02:58:12.193567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.624 [2024-12-16 02:58:12.193602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.624 qpair failed and we were unable to recover it. 
00:36:41.624 [2024-12-16 02:58:12.193730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.624 [2024-12-16 02:58:12.193763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.624 qpair failed and we were unable to recover it. 00:36:41.624 [2024-12-16 02:58:12.193942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.624 [2024-12-16 02:58:12.193978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.624 qpair failed and we were unable to recover it. 00:36:41.624 [2024-12-16 02:58:12.194197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.624 [2024-12-16 02:58:12.194233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.624 qpair failed and we were unable to recover it. 00:36:41.624 [2024-12-16 02:58:12.194372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.624 [2024-12-16 02:58:12.194408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.624 qpair failed and we were unable to recover it. 00:36:41.624 [2024-12-16 02:58:12.194611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.624 [2024-12-16 02:58:12.194645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.624 qpair failed and we were unable to recover it. 
00:36:41.624 [2024-12-16 02:58:12.194806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.624 [2024-12-16 02:58:12.194840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:41.624 qpair failed and we were unable to recover it.
00:36:41.627 [2024-12-16 02:58:12.218117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.627 [2024-12-16 02:58:12.218152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.627 qpair failed and we were unable to recover it. 00:36:41.627 [2024-12-16 02:58:12.218372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.627 [2024-12-16 02:58:12.218406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.627 qpair failed and we were unable to recover it. 00:36:41.627 [2024-12-16 02:58:12.218526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.627 [2024-12-16 02:58:12.218559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.627 qpair failed and we were unable to recover it. 00:36:41.627 [2024-12-16 02:58:12.218685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.627 [2024-12-16 02:58:12.218719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.627 qpair failed and we were unable to recover it. 00:36:41.627 [2024-12-16 02:58:12.218991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.627 [2024-12-16 02:58:12.219027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.627 qpair failed and we were unable to recover it. 
00:36:41.627 [2024-12-16 02:58:12.219210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.627 [2024-12-16 02:58:12.219288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.627 qpair failed and we were unable to recover it. 00:36:41.627 [2024-12-16 02:58:12.219430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.627 [2024-12-16 02:58:12.219468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 00:36:41.628 [2024-12-16 02:58:12.219582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.219617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 00:36:41.628 [2024-12-16 02:58:12.219811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.219845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 00:36:41.628 [2024-12-16 02:58:12.219987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.220021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 
00:36:41.628 [2024-12-16 02:58:12.220202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.220237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 00:36:41.628 [2024-12-16 02:58:12.220364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.220409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 00:36:41.628 [2024-12-16 02:58:12.220619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.220651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 00:36:41.628 [2024-12-16 02:58:12.220780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.220814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 00:36:41.628 [2024-12-16 02:58:12.220945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.220980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 
00:36:41.628 [2024-12-16 02:58:12.221083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.221116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 00:36:41.628 [2024-12-16 02:58:12.221365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.221399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 00:36:41.628 [2024-12-16 02:58:12.221521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.221554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 00:36:41.628 [2024-12-16 02:58:12.221727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.221771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 00:36:41.628 [2024-12-16 02:58:12.221970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.222005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 
00:36:41.628 [2024-12-16 02:58:12.222132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.222165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 00:36:41.628 [2024-12-16 02:58:12.222414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.222449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 00:36:41.628 [2024-12-16 02:58:12.222556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.222589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 00:36:41.628 [2024-12-16 02:58:12.222718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.222751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 00:36:41.628 [2024-12-16 02:58:12.222883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.222918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 
00:36:41.628 [2024-12-16 02:58:12.223100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.223133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 00:36:41.628 [2024-12-16 02:58:12.223244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.223277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 00:36:41.628 [2024-12-16 02:58:12.223490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.223523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 00:36:41.628 [2024-12-16 02:58:12.223721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.223754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 00:36:41.628 [2024-12-16 02:58:12.223878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.223911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 
00:36:41.628 [2024-12-16 02:58:12.224042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.224074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 00:36:41.628 [2024-12-16 02:58:12.224285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.224319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 00:36:41.628 [2024-12-16 02:58:12.224443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.224477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 00:36:41.628 [2024-12-16 02:58:12.224585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.224618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 00:36:41.628 [2024-12-16 02:58:12.224781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.224813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 
00:36:41.628 [2024-12-16 02:58:12.225095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.225130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 00:36:41.628 [2024-12-16 02:58:12.225243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.225276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 00:36:41.628 [2024-12-16 02:58:12.225401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.225434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 00:36:41.628 [2024-12-16 02:58:12.225613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.225647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 00:36:41.628 [2024-12-16 02:58:12.225755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.225788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 
00:36:41.628 [2024-12-16 02:58:12.225975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.226010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 00:36:41.628 [2024-12-16 02:58:12.226317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.226351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 00:36:41.628 [2024-12-16 02:58:12.226465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.226498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 00:36:41.628 [2024-12-16 02:58:12.226685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.226718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 00:36:41.628 [2024-12-16 02:58:12.226837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.628 [2024-12-16 02:58:12.226881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.628 qpair failed and we were unable to recover it. 
00:36:41.628 [2024-12-16 02:58:12.227132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.629 [2024-12-16 02:58:12.227174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.629 qpair failed and we were unable to recover it. 00:36:41.629 [2024-12-16 02:58:12.227305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.629 [2024-12-16 02:58:12.227339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.629 qpair failed and we were unable to recover it. 00:36:41.629 [2024-12-16 02:58:12.227482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.908 [2024-12-16 02:58:12.227516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-16 02:58:12.227647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-16 02:58:12.227680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-16 02:58:12.227894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-16 02:58:12.227931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 
00:36:41.909 [2024-12-16 02:58:12.228067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-16 02:58:12.228102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-16 02:58:12.228332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-16 02:58:12.228364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-16 02:58:12.228574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-16 02:58:12.228607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-16 02:58:12.228825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-16 02:58:12.228870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-16 02:58:12.229012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-16 02:58:12.229045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 
00:36:41.909 [2024-12-16 02:58:12.229314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-16 02:58:12.229348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-16 02:58:12.229488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-16 02:58:12.229520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-16 02:58:12.229717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-16 02:58:12.229751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-16 02:58:12.229902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-16 02:58:12.229937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-16 02:58:12.230153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-16 02:58:12.230186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 
00:36:41.909 [2024-12-16 02:58:12.230313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-16 02:58:12.230346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-16 02:58:12.230600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-16 02:58:12.230634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-16 02:58:12.230863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-16 02:58:12.230898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-16 02:58:12.231097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-16 02:58:12.231131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-16 02:58:12.231266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-16 02:58:12.231300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 
00:36:41.909 [2024-12-16 02:58:12.231432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-16 02:58:12.231466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-16 02:58:12.231647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-16 02:58:12.231680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-16 02:58:12.231888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-16 02:58:12.231926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-16 02:58:12.232071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-16 02:58:12.232104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-16 02:58:12.232251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-16 02:58:12.232283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 
00:36:41.909 [2024-12-16 02:58:12.232556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-16 02:58:12.232589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-16 02:58:12.232837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-16 02:58:12.232881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-16 02:58:12.233076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-16 02:58:12.233116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-16 02:58:12.233261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-16 02:58:12.233294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 00:36:41.909 [2024-12-16 02:58:12.233563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.909 [2024-12-16 02:58:12.233596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.909 qpair failed and we were unable to recover it. 
00:36:41.909 [2024-12-16 02:58:12.233844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.909 [2024-12-16 02:58:12.233890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:41.909 qpair failed and we were unable to recover it.
[... the same three-line record (connect() failed, errno = 111 / sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every connection attempt from 02:58:12.234102 through 02:58:12.252130 ...]
00:36:41.909 [2024-12-16 02:58:12.252189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdadc70 (9): Bad file descriptor
00:36:41.909 [2024-12-16 02:58:12.252741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.909 [2024-12-16 02:58:12.252821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:41.910 qpair failed and we were unable to recover it.
[... the same record then repeats with tqpair=0x7f273c000b90 and addr=10.0.0.2, port=4420 from 02:58:12.253146 through 02:58:12.261985 ...]
00:36:41.910 [2024-12-16 02:58:12.262251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-16 02:58:12.262285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-16 02:58:12.262578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-16 02:58:12.262612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-16 02:58:12.262823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-16 02:58:12.262866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-16 02:58:12.263125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-16 02:58:12.263159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-16 02:58:12.263359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-16 02:58:12.263394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 
00:36:41.910 [2024-12-16 02:58:12.263667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-16 02:58:12.263701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-16 02:58:12.263831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-16 02:58:12.263871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-16 02:58:12.264152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-16 02:58:12.264186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-16 02:58:12.264341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-16 02:58:12.264375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-16 02:58:12.264638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-16 02:58:12.264671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 
00:36:41.910 [2024-12-16 02:58:12.264808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-16 02:58:12.264842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-16 02:58:12.264994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-16 02:58:12.265029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-16 02:58:12.265311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-16 02:58:12.265346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-16 02:58:12.265567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-16 02:58:12.265600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-16 02:58:12.265884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-16 02:58:12.265920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 
00:36:41.910 [2024-12-16 02:58:12.266075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-16 02:58:12.266109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.910 qpair failed and we were unable to recover it. 00:36:41.910 [2024-12-16 02:58:12.266260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.910 [2024-12-16 02:58:12.266294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-16 02:58:12.266577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.266611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-16 02:58:12.266898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.266934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-16 02:58:12.267140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.267174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 
00:36:41.911 [2024-12-16 02:58:12.267358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.267391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-16 02:58:12.267589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.267622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-16 02:58:12.267942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.267978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-16 02:58:12.268183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.268217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-16 02:58:12.268414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.268448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 
00:36:41.911 [2024-12-16 02:58:12.268728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.268761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-16 02:58:12.269048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.269084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-16 02:58:12.269302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.269336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-16 02:58:12.269544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.269578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-16 02:58:12.269888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.269924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 
00:36:41.911 [2024-12-16 02:58:12.270136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.270176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-16 02:58:12.270435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.270470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-16 02:58:12.270616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.270650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-16 02:58:12.270926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.270961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-16 02:58:12.271269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.271303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 
00:36:41.911 [2024-12-16 02:58:12.271500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.271533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-16 02:58:12.271727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.271761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-16 02:58:12.271964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.272000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-16 02:58:12.272204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.272238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-16 02:58:12.272375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.272409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 
00:36:41.911 [2024-12-16 02:58:12.272629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.272663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-16 02:58:12.272856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.272891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-16 02:58:12.273021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.273055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-16 02:58:12.273309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.273343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-16 02:58:12.273631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.273666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 
00:36:41.911 [2024-12-16 02:58:12.273809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.273843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-16 02:58:12.274066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.274102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-16 02:58:12.274331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.274365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-16 02:58:12.274572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.274605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-16 02:58:12.274808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.274842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 
00:36:41.911 [2024-12-16 02:58:12.275039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.275074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-16 02:58:12.275299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.275332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.911 [2024-12-16 02:58:12.275568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.911 [2024-12-16 02:58:12.275602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.911 qpair failed and we were unable to recover it. 00:36:41.912 [2024-12-16 02:58:12.275816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.912 [2024-12-16 02:58:12.275858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.912 qpair failed and we were unable to recover it. 00:36:41.912 [2024-12-16 02:58:12.276112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.912 [2024-12-16 02:58:12.276146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.912 qpair failed and we were unable to recover it. 
00:36:41.912 [2024-12-16 02:58:12.276362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.912 [2024-12-16 02:58:12.276395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.912 qpair failed and we were unable to recover it. 00:36:41.912 [2024-12-16 02:58:12.276659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.912 [2024-12-16 02:58:12.276692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.912 qpair failed and we were unable to recover it. 00:36:41.912 [2024-12-16 02:58:12.276981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.912 [2024-12-16 02:58:12.277017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.912 qpair failed and we were unable to recover it. 00:36:41.912 [2024-12-16 02:58:12.277167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.912 [2024-12-16 02:58:12.277201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.912 qpair failed and we were unable to recover it. 00:36:41.912 [2024-12-16 02:58:12.277502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.912 [2024-12-16 02:58:12.277537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.912 qpair failed and we were unable to recover it. 
00:36:41.912 [2024-12-16 02:58:12.277793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.912 [2024-12-16 02:58:12.277827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.912 qpair failed and we were unable to recover it. 00:36:41.912 [2024-12-16 02:58:12.278000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.912 [2024-12-16 02:58:12.278034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.912 qpair failed and we were unable to recover it. 00:36:41.912 [2024-12-16 02:58:12.278246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.912 [2024-12-16 02:58:12.278279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.912 qpair failed and we were unable to recover it. 00:36:41.912 [2024-12-16 02:58:12.278533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.912 [2024-12-16 02:58:12.278567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.912 qpair failed and we were unable to recover it. 00:36:41.912 [2024-12-16 02:58:12.278841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.912 [2024-12-16 02:58:12.278890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.912 qpair failed and we were unable to recover it. 
00:36:41.912 [2024-12-16 02:58:12.279094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.912 [2024-12-16 02:58:12.279129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.912 qpair failed and we were unable to recover it. 00:36:41.912 [2024-12-16 02:58:12.279334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.912 [2024-12-16 02:58:12.279368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.912 qpair failed and we were unable to recover it. 00:36:41.912 [2024-12-16 02:58:12.279587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.912 [2024-12-16 02:58:12.279621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.912 qpair failed and we were unable to recover it. 00:36:41.912 [2024-12-16 02:58:12.279811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.912 [2024-12-16 02:58:12.279846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.912 qpair failed and we were unable to recover it. 00:36:41.912 [2024-12-16 02:58:12.280125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.912 [2024-12-16 02:58:12.280159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.912 qpair failed and we were unable to recover it. 
00:36:41.912 [2024-12-16 02:58:12.280302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.912 [2024-12-16 02:58:12.280343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.912 qpair failed and we were unable to recover it. 00:36:41.912 [2024-12-16 02:58:12.280649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.912 [2024-12-16 02:58:12.280684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.912 qpair failed and we were unable to recover it. 00:36:41.912 [2024-12-16 02:58:12.280883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.912 [2024-12-16 02:58:12.280918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.912 qpair failed and we were unable to recover it. 00:36:41.912 [2024-12-16 02:58:12.281055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.912 [2024-12-16 02:58:12.281088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.912 qpair failed and we were unable to recover it. 00:36:41.912 [2024-12-16 02:58:12.281297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.912 [2024-12-16 02:58:12.281331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.912 qpair failed and we were unable to recover it. 
00:36:41.912 [2024-12-16 02:58:12.281648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.912 [2024-12-16 02:58:12.281682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.912 qpair failed and we were unable to recover it.
[log condensed: the same three-line sequence — posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats 114 more times, from 02:58:12.281952 through 02:58:12.310402]
00:36:41.915 [2024-12-16 02:58:12.310600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.310640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.310953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.310988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.311129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.311163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.311369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.311403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.311674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.311707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 
00:36:41.915 [2024-12-16 02:58:12.311857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.311893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.312026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.312060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.312249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.312283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.312481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.312514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.312713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.312747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 
00:36:41.915 [2024-12-16 02:58:12.312968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.313004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.313220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.313253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.313391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.313425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.313638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.313672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.313883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.313920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 
00:36:41.915 [2024-12-16 02:58:12.314063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.314096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.314253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.314286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.314568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.314601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.314875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.314911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.315102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.315135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 
00:36:41.915 [2024-12-16 02:58:12.315328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.315362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.315563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.315597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.315795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.315828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.316150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.316185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.316316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.316350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 
00:36:41.915 [2024-12-16 02:58:12.316645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.316680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.316945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.316980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.317225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.317297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.317639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.317678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.317961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.317998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 
00:36:41.915 [2024-12-16 02:58:12.318270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.318306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.318538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.318572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.318765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.318795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.319010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.319046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.319243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.319277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 
00:36:41.915 [2024-12-16 02:58:12.319544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.319579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.319795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.319830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.320043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.320078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.320224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.320255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.915 qpair failed and we were unable to recover it. 00:36:41.915 [2024-12-16 02:58:12.320540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.915 [2024-12-16 02:58:12.320574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 
00:36:41.916 [2024-12-16 02:58:12.320857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.320904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.321094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.321128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.321331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.321366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.321685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.321722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.321997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.322032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 
00:36:41.916 [2024-12-16 02:58:12.322310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.322345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.322483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.322518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.322823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.322873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.323074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.323107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.323375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.323409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 
00:36:41.916 [2024-12-16 02:58:12.323674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.323709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.323906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.323943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.324099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.324133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.324358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.324393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.324653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.324687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 
00:36:41.916 [2024-12-16 02:58:12.324967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.325002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.325206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.325239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.325482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.325516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.325741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.325776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.325951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.325987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 
00:36:41.916 [2024-12-16 02:58:12.326239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.326272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.326476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.326509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.326706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.326740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.326882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.326920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.327139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.327173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 
00:36:41.916 [2024-12-16 02:58:12.327373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.327407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.327648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.327683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.327891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.327928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.328067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.328101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.328246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.328280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 
00:36:41.916 [2024-12-16 02:58:12.328431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.328465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.328662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.328697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.328917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.328954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.329213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.329247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.329519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.329554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 
00:36:41.916 [2024-12-16 02:58:12.329681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.329715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.329977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.330012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.330262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.330296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.330624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.330658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.330913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.330949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 
00:36:41.916 [2024-12-16 02:58:12.331143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.331184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.331408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.331442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.331695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.331729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.916 [2024-12-16 02:58:12.331918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.916 [2024-12-16 02:58:12.331955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.916 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.332170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.332206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 
00:36:41.917 [2024-12-16 02:58:12.332342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.332375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.332517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.332550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.332756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.332790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.333020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.333056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.333328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.333363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 
00:36:41.917 [2024-12-16 02:58:12.333587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.333622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.333759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.333792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.334008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.334044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.334252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.334285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.334471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.334505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 
00:36:41.917 [2024-12-16 02:58:12.334693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.334727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.334930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.334967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.335249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.335284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.335486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.335520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.335740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.335774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 
00:36:41.917 [2024-12-16 02:58:12.335982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.336020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.336142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.336176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.336379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.336413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.336609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.336644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.336898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.336934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 
00:36:41.917 [2024-12-16 02:58:12.337213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.337247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.337461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.337495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.337622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.337656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.337874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.337910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.338045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.338079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 
00:36:41.917 [2024-12-16 02:58:12.338267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.338301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.338598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.338632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.338813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.338858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.339063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.339097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.339293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.339327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 
00:36:41.917 [2024-12-16 02:58:12.339601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.339635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.339819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.339860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.340127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.340162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.340430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.340464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.340665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.340700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 
00:36:41.917 [2024-12-16 02:58:12.340954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.340997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.341202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.341236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.341494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.341529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.341808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.341841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.342056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.342091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 
00:36:41.917 [2024-12-16 02:58:12.342375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.342409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.342706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.342741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.343009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.343044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.343200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.343234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 00:36:41.917 [2024-12-16 02:58:12.343427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.917 [2024-12-16 02:58:12.343460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.917 qpair failed and we were unable to recover it. 
00:36:41.917 [2024-12-16 02:58:12.343726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.343760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.344055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.344091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.344378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.344412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.344600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.344635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.344899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.344936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 
00:36:41.918 [2024-12-16 02:58:12.345231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.345264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.345469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.345503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.345780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.345814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.346095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.346130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.346409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.346444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 
00:36:41.918 [2024-12-16 02:58:12.346729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.346763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.347042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.347078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.347357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.347391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.347672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.347706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.347926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.347961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 
00:36:41.918 [2024-12-16 02:58:12.348243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.348277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.348551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.348585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.348872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.348908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.349186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.349220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.349423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.349458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 
00:36:41.918 [2024-12-16 02:58:12.349648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.349682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.349940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.349975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.350225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.350259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.350510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.350544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.350854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.350889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 
00:36:41.918 [2024-12-16 02:58:12.351144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.351179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.351461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.351495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.351625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.351659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.351918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.351955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.352102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.352136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 
00:36:41.918 [2024-12-16 02:58:12.352321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.352360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.352630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.352665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.352931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.352968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.353118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.353152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.353407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.353442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 
00:36:41.918 [2024-12-16 02:58:12.353740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.353774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.354054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.354089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.354314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.354348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.354613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.354648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.354835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.354879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 
00:36:41.918 [2024-12-16 02:58:12.355083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.355118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.355373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.355409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.355614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.355648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.355910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.355946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 00:36:41.918 [2024-12-16 02:58:12.356214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.918 [2024-12-16 02:58:12.356248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.918 qpair failed and we were unable to recover it. 
00:36:41.919 [2024-12-16 02:58:12.359592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.919 [2024-12-16 02:58:12.359670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:41.919 qpair failed and we were unable to recover it.
00:36:41.921 [2024-12-16 02:58:12.386202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.386237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.386421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.386456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.386635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.386670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.386925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.386961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.387236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.387270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 
00:36:41.921 [2024-12-16 02:58:12.387499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.387533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.387783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.387817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.388029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.388064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.388263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.388298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.388568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.388604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 
00:36:41.921 [2024-12-16 02:58:12.388736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.388770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.389022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.389057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.389332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.389366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.389585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.389620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.389844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.389887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 
00:36:41.921 [2024-12-16 02:58:12.390141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.390176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.390411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.390445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.390707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.390741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.391028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.391064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.391324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.391359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 
00:36:41.921 [2024-12-16 02:58:12.391636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.391670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.391972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.392008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.392269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.392310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.392597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.392633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.392845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.392887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 
00:36:41.921 [2024-12-16 02:58:12.393025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.393059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.393243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.393277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.393501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.393535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.393729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.393764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.394027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.394062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 
00:36:41.921 [2024-12-16 02:58:12.394209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.394242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.394439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.394473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.394749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.394785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.394912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.394948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.395160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.395194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 
00:36:41.921 [2024-12-16 02:58:12.395488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.395522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.395782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.395817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.395987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.396022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.396296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.396331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.396604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.396639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 
00:36:41.921 [2024-12-16 02:58:12.396931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.396971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.397237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.397271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.397524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.921 [2024-12-16 02:58:12.397558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.921 qpair failed and we were unable to recover it. 00:36:41.921 [2024-12-16 02:58:12.397765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.397799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 00:36:41.922 [2024-12-16 02:58:12.398128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.398164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 
00:36:41.922 [2024-12-16 02:58:12.398393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.398426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 00:36:41.922 [2024-12-16 02:58:12.398678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.398713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 00:36:41.922 [2024-12-16 02:58:12.399029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.399065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 00:36:41.922 [2024-12-16 02:58:12.399321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.399354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 00:36:41.922 [2024-12-16 02:58:12.399603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.399638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 
00:36:41.922 [2024-12-16 02:58:12.399912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.399948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 00:36:41.922 [2024-12-16 02:58:12.400155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.400189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 00:36:41.922 [2024-12-16 02:58:12.400345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.400380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 00:36:41.922 [2024-12-16 02:58:12.400505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.400539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 00:36:41.922 [2024-12-16 02:58:12.400789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.400824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 
00:36:41.922 [2024-12-16 02:58:12.401102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.401138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 00:36:41.922 [2024-12-16 02:58:12.401476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.401510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 00:36:41.922 [2024-12-16 02:58:12.401734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.401768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 00:36:41.922 [2024-12-16 02:58:12.402002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.402039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 00:36:41.922 [2024-12-16 02:58:12.402255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.402289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 
00:36:41.922 [2024-12-16 02:58:12.402480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.402514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 00:36:41.922 [2024-12-16 02:58:12.402793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.402827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 00:36:41.922 [2024-12-16 02:58:12.403113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.403154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 00:36:41.922 [2024-12-16 02:58:12.403426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.403460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 00:36:41.922 [2024-12-16 02:58:12.403763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.403798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 
00:36:41.922 [2024-12-16 02:58:12.404069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.404104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 00:36:41.922 [2024-12-16 02:58:12.404250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.404285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 00:36:41.922 [2024-12-16 02:58:12.404551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.404586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 00:36:41.922 [2024-12-16 02:58:12.404838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.404885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 00:36:41.922 [2024-12-16 02:58:12.405033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.405067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 
00:36:41.922 [2024-12-16 02:58:12.405257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.405292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 00:36:41.922 [2024-12-16 02:58:12.405497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.405530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 00:36:41.922 [2024-12-16 02:58:12.405783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.405818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 00:36:41.922 [2024-12-16 02:58:12.406050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.406086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 00:36:41.922 [2024-12-16 02:58:12.406290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.406324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 
00:36:41.922 [2024-12-16 02:58:12.406569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.922 [2024-12-16 02:58:12.406603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.922 qpair failed and we were unable to recover it. 
00:36:41.924 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1210077 Killed "${NVMF_APP[@]}" "$@" 
00:36:41.924 [2024-12-16 02:58:12.435150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.924 [2024-12-16 02:58:12.435185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.924 qpair failed and we were unable to recover it. 00:36:41.924 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:36:41.924 [2024-12-16 02:58:12.435416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.924 [2024-12-16 02:58:12.435460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.924 qpair failed and we were unable to recover it. 00:36:41.924 [2024-12-16 02:58:12.435680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.924 [2024-12-16 02:58:12.435718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.924 qpair failed and we were unable to recover it. 00:36:41.924 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:41.924 [2024-12-16 02:58:12.435962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.924 [2024-12-16 02:58:12.435999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.924 qpair failed and we were unable to recover it. 
00:36:41.924 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:41.924 [2024-12-16 02:58:12.436278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.924 [2024-12-16 02:58:12.436316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.924 qpair failed and we were unable to recover it. 00:36:41.924 [2024-12-16 02:58:12.436438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.924 [2024-12-16 02:58:12.436475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.924 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:41.924 qpair failed and we were unable to recover it. 00:36:41.924 [2024-12-16 02:58:12.436748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.924 [2024-12-16 02:58:12.436784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.924 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:41.924 qpair failed and we were unable to recover it. 00:36:41.924 [2024-12-16 02:58:12.436987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.924 [2024-12-16 02:58:12.437023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.924 qpair failed and we were unable to recover it. 
00:36:41.924 [2024-12-16 02:58:12.437296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.924 [2024-12-16 02:58:12.437331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.924 qpair failed and we were unable to recover it. 00:36:41.924 [2024-12-16 02:58:12.437549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.924 [2024-12-16 02:58:12.437584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.924 qpair failed and we were unable to recover it. 00:36:41.924 [2024-12-16 02:58:12.437844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.924 [2024-12-16 02:58:12.437892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.924 qpair failed and we were unable to recover it. 00:36:41.924 [2024-12-16 02:58:12.438185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.924 [2024-12-16 02:58:12.438220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.924 qpair failed and we were unable to recover it. 00:36:41.924 [2024-12-16 02:58:12.438478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.924 [2024-12-16 02:58:12.438511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.924 qpair failed and we were unable to recover it. 
00:36:41.924 [2024-12-16 02:58:12.438758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.924 [2024-12-16 02:58:12.438794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.924 qpair failed and we were unable to recover it. 00:36:41.924 [2024-12-16 02:58:12.439036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.924 [2024-12-16 02:58:12.439071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.924 qpair failed and we were unable to recover it. 00:36:41.924 [2024-12-16 02:58:12.439397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.924 [2024-12-16 02:58:12.439432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.924 qpair failed and we were unable to recover it. 00:36:41.924 [2024-12-16 02:58:12.439664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.924 [2024-12-16 02:58:12.439700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.924 qpair failed and we were unable to recover it. 00:36:41.924 [2024-12-16 02:58:12.439896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.924 [2024-12-16 02:58:12.439932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.924 qpair failed and we were unable to recover it. 
00:36:41.924 [2024-12-16 02:58:12.440134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.924 [2024-12-16 02:58:12.440169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.924 qpair failed and we were unable to recover it. 00:36:41.924 [2024-12-16 02:58:12.440431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.924 [2024-12-16 02:58:12.440465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.924 qpair failed and we were unable to recover it. 00:36:41.924 [2024-12-16 02:58:12.440729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.440761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.440959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.440997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.441267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.441302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 
00:36:41.925 [2024-12-16 02:58:12.441443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.441477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.441774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.441808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.442024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.442059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.442260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.442301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.442583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.442618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 
00:36:41.925 [2024-12-16 02:58:12.442870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.442906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.443178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.443213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.443435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.443469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.443679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.443715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1210775 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.443970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.444005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 
00:36:41.925 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1210775 00:36:41.925 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:41.925 [2024-12-16 02:58:12.444262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.444300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.444508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1210775 ']' 00:36:41.925 [2024-12-16 02:58:12.444544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.444747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.444782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 
00:36:41.925 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:41.925 [2024-12-16 02:58:12.445058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.445096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:41.925 [2024-12-16 02:58:12.445284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.445322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:41.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:41.925 [2024-12-16 02:58:12.445515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.445557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 
00:36:41.925 [2024-12-16 02:58:12.445761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:41.925 [2024-12-16 02:58:12.445797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.446069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:41.925 [2024-12-16 02:58:12.446107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.446393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.446428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.446636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.446670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 
00:36:41.925 [2024-12-16 02:58:12.446788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.446823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.447042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.447077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.447404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.447439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.447708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.447744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.447973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.448009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 
00:36:41.925 [2024-12-16 02:58:12.448227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.448270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.448549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.448583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.448868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.448903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.449050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.449089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.449310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.449345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 
00:36:41.925 [2024-12-16 02:58:12.449568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.449602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.449864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.449900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.450139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.450174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.450430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.450464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.450657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.450692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 
00:36:41.925 [2024-12-16 02:58:12.450918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.450953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.451091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.451126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.451416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.451450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.451706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.451741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.451967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.452005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 
00:36:41.925 [2024-12-16 02:58:12.452206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.452241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.452442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.452476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.452756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.452790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.925 qpair failed and we were unable to recover it. 00:36:41.925 [2024-12-16 02:58:12.453055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.925 [2024-12-16 02:58:12.453091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.926 qpair failed and we were unable to recover it. 00:36:41.926 [2024-12-16 02:58:12.453232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.926 [2024-12-16 02:58:12.453267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.926 qpair failed and we were unable to recover it. 
00:36:41.926 [2024-12-16 02:58:12.453562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.926 [2024-12-16 02:58:12.453597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.926 qpair failed and we were unable to recover it. 00:36:41.926 [2024-12-16 02:58:12.453823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.926 [2024-12-16 02:58:12.453868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.926 qpair failed and we were unable to recover it. 00:36:41.926 [2024-12-16 02:58:12.454152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.926 [2024-12-16 02:58:12.454190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.926 qpair failed and we were unable to recover it. 00:36:41.926 [2024-12-16 02:58:12.454388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.926 [2024-12-16 02:58:12.454423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.926 qpair failed and we were unable to recover it. 00:36:41.926 [2024-12-16 02:58:12.454628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.926 [2024-12-16 02:58:12.454664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.926 qpair failed and we were unable to recover it. 
00:36:41.926 [2024-12-16 02:58:12.454788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.926 [2024-12-16 02:58:12.454823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.926 qpair failed and we were unable to recover it. 00:36:41.926 [2024-12-16 02:58:12.455045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.926 [2024-12-16 02:58:12.455081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.926 qpair failed and we were unable to recover it. 00:36:41.926 [2024-12-16 02:58:12.455218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.926 [2024-12-16 02:58:12.455252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.926 qpair failed and we were unable to recover it. 00:36:41.926 [2024-12-16 02:58:12.455452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.926 [2024-12-16 02:58:12.455485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.926 qpair failed and we were unable to recover it. 00:36:41.926 [2024-12-16 02:58:12.455779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.926 [2024-12-16 02:58:12.455816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.926 qpair failed and we were unable to recover it. 
00:36:41.926 [2024-12-16 02:58:12.456111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.926 [2024-12-16 02:58:12.456148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.926 qpair failed and we were unable to recover it. 00:36:41.926 [2024-12-16 02:58:12.456350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.926 [2024-12-16 02:58:12.456385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.926 qpair failed and we were unable to recover it. 00:36:41.926 [2024-12-16 02:58:12.456603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.926 [2024-12-16 02:58:12.456638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.926 qpair failed and we were unable to recover it. 00:36:41.926 [2024-12-16 02:58:12.456822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.926 [2024-12-16 02:58:12.456870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.926 qpair failed and we were unable to recover it. 00:36:41.926 [2024-12-16 02:58:12.457013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.926 [2024-12-16 02:58:12.457047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.926 qpair failed and we were unable to recover it. 
00:36:41.927 [2024-12-16 02:58:12.474494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.927 [2024-12-16 02:58:12.474528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:41.927 qpair failed and we were unable to recover it.
00:36:41.927 [2024-12-16 02:58:12.474734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.927 [2024-12-16 02:58:12.474768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:41.927 qpair failed and we were unable to recover it.
00:36:41.927 [2024-12-16 02:58:12.474969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.927 [2024-12-16 02:58:12.475049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:41.927 qpair failed and we were unable to recover it.
00:36:41.927 [2024-12-16 02:58:12.475267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.927 [2024-12-16 02:58:12.475309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:41.927 qpair failed and we were unable to recover it.
00:36:41.927 [2024-12-16 02:58:12.475505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:41.927 [2024-12-16 02:58:12.475541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:41.927 qpair failed and we were unable to recover it.
00:36:41.928 [2024-12-16 02:58:12.483823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.483866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.484053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.484087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.484218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.484253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.484391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.484425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.484640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.484674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 
00:36:41.928 [2024-12-16 02:58:12.484865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.484901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.485178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.485212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.485350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.485384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.485636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.485671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.485805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.485839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 
00:36:41.928 [2024-12-16 02:58:12.486007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.486042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.486187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.486222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.486497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.486532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.486739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.486782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.486894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.486927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 
00:36:41.928 [2024-12-16 02:58:12.487063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.487096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.487350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.487391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.487588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.487623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.487815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.487862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.488050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.488085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 
00:36:41.928 [2024-12-16 02:58:12.488320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.488355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.488603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.488637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.488888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.488925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.489119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.489153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.489435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.489470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 
00:36:41.928 [2024-12-16 02:58:12.489595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.489629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.489838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.489882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.490155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.490189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.490326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.490360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.490489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.490523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 
00:36:41.928 [2024-12-16 02:58:12.490754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.490829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.491115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.491193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.491436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.491473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.491779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.491813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.491955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.491991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 
00:36:41.928 [2024-12-16 02:58:12.492177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.492211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.492401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.492435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.492569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.492603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.492822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.928 [2024-12-16 02:58:12.492867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.928 qpair failed and we were unable to recover it. 00:36:41.928 [2024-12-16 02:58:12.493061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.493096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 
00:36:41.929 [2024-12-16 02:58:12.493370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.493404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.493579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.493612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.493897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.493934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.494142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.494188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.494382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.494416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 
00:36:41.929 [2024-12-16 02:58:12.494624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.494660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.494873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.494908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.495109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.495144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.495118] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:36:41.929 [2024-12-16 02:58:12.495172] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:41.929 [2024-12-16 02:58:12.495398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.495439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 
00:36:41.929 [2024-12-16 02:58:12.495564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.495597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.495708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.495751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.495965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.495999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.496208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.496242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.496454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.496486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 
00:36:41.929 [2024-12-16 02:58:12.496740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.496776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.496924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.496960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.497125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.497160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.497425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.497464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.497723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.497759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 
00:36:41.929 [2024-12-16 02:58:12.497981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.498020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.498144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.498181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.498306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.498342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.498527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.498564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.498752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.498787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 
00:36:41.929 [2024-12-16 02:58:12.498910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.498946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.499224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.499259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.499527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.499563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.499769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.499806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.499955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.499991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 
00:36:41.929 [2024-12-16 02:58:12.500266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.500309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.500439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.500474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.500597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.500629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.500739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.500771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.500899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.500932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 
00:36:41.929 [2024-12-16 02:58:12.501075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.501106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.501248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.501279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.501403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.501435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.501635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.501668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.501795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.501828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 
00:36:41.929 [2024-12-16 02:58:12.502029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.502065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.502183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.502215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.502499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.502533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.502755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.502788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.503001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.503037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 
00:36:41.929 [2024-12-16 02:58:12.503229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.503264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.503460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.503494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.503619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.503653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.503796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.503830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.504028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.504063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 
00:36:41.929 [2024-12-16 02:58:12.504281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.504315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.504517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.504551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.504687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.504721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.504904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.504940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.505059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.505093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 
00:36:41.929 [2024-12-16 02:58:12.505234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.505269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.505455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.505490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.929 qpair failed and we were unable to recover it. 00:36:41.929 [2024-12-16 02:58:12.505760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.929 [2024-12-16 02:58:12.505795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.505941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.505978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.506182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.506217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 
00:36:41.930 [2024-12-16 02:58:12.506330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.506365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.506615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.506649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.506791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.506825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.507085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.507119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.507302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.507337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 
00:36:41.930 [2024-12-16 02:58:12.507557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.507592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.507705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.507756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.507991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.508027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.508216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.508251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.508435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.508470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 
00:36:41.930 [2024-12-16 02:58:12.508655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.508690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.508808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.508843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.509047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.509083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.509302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.509336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.509519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.509553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 
00:36:41.930 [2024-12-16 02:58:12.509819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.509864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.509995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.510030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.510172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.510207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.510394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.510429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.510537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.510572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 
00:36:41.930 [2024-12-16 02:58:12.510754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.510788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.510920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.510955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.511071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.511105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.511235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.511269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.511454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.511496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 
00:36:41.930 [2024-12-16 02:58:12.511756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.511792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.511930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.511965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.512164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.512199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.512382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.512417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.512666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.512700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 
00:36:41.930 [2024-12-16 02:58:12.512962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.512998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.513136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.513170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.513385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.513421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.513618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.513653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.513877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.513912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 
00:36:41.930 [2024-12-16 02:58:12.514168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.514202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.514420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.514454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.514697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.514731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.514945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.514986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.515180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.515216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 
00:36:41.930 [2024-12-16 02:58:12.515348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.515382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.515861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.515905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.516098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.516136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.516331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.516365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.516559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.516593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 
00:36:41.930 [2024-12-16 02:58:12.516780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.516824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.516984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.517020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.517276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.517310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.517520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.517554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.517682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.517718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 
00:36:41.930 [2024-12-16 02:58:12.517916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.517952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.518151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.518185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.518312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.518348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.518536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.518570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.518745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.518779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 
00:36:41.930 [2024-12-16 02:58:12.518956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.518991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.519195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.519230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.519483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.519517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.930 qpair failed and we were unable to recover it. 00:36:41.930 [2024-12-16 02:58:12.519726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.930 [2024-12-16 02:58:12.519759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.519880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.519916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 
00:36:41.931 [2024-12-16 02:58:12.520130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.520166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.520439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.520475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.520649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.520685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.520811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.520867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.521095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.521131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 
00:36:41.931 [2024-12-16 02:58:12.521405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.521440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.521558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.521594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.521805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.521839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.522115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.522150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.522277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.522312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 
00:36:41.931 [2024-12-16 02:58:12.522509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.522544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.522665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.522700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.522888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.522924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.523174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.523209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.523453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.523488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 
00:36:41.931 [2024-12-16 02:58:12.523676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.523710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.523931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.523967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.524096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.524130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.524251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.524286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.524516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.524590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 
00:36:41.931 [2024-12-16 02:58:12.524728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.524768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.524975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.525014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.525205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.525240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.525423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.525459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.525664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.525699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 
00:36:41.931 [2024-12-16 02:58:12.525884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.525920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.526110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.526144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.526337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.526371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.526621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.526655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.526861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.526897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 
00:36:41.931 [2024-12-16 02:58:12.527174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.527209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.527403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.527436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.527565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.527599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.527880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.527917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.528114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.528148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 
00:36:41.931 [2024-12-16 02:58:12.528340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.528373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.528502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.528536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.528804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.528837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.528989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.529024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.529227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.529261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 
00:36:41.931 [2024-12-16 02:58:12.529463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.529496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.529688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.529722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.529935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.529970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.530214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.530248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.530447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.530480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 
00:36:41.931 [2024-12-16 02:58:12.530601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.530633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.530860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.530896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.531131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.531165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.531349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.531382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.531628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.531661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 
00:36:41.931 [2024-12-16 02:58:12.531889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.531925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.532197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.532230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.532367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.532400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.532648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.931 [2024-12-16 02:58:12.532682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.931 qpair failed and we were unable to recover it. 00:36:41.931 [2024-12-16 02:58:12.532868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.532901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 
00:36:41.932 [2024-12-16 02:58:12.533023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.533057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.533327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.533361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.533474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.533506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.533618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.533651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.533832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.533886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 
00:36:41.932 [2024-12-16 02:58:12.534007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.534041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.534218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.534250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.534459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.534492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.534617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.534649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.534904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.534938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 
00:36:41.932 [2024-12-16 02:58:12.535129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.535162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.535283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.535317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.535454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.535487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.535737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.535769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.535957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.535992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 
00:36:41.932 [2024-12-16 02:58:12.536189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.536223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.536487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.536520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.536771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.536803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.536934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.536969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.537158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.537191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 
00:36:41.932 [2024-12-16 02:58:12.537382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.537415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.537551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.537585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.537785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.537817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.538003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.538036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.538140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.538173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 
00:36:41.932 [2024-12-16 02:58:12.538306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.538367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.538549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.538583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.538755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.538787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.538982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.539016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.539288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.539321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 
00:36:41.932 [2024-12-16 02:58:12.539589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.539622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.539812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.539845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.540127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.540161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.540298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.540331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.540533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.540567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 
00:36:41.932 [2024-12-16 02:58:12.540824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.540866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.541142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.541177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.541305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.541338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.541525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.541558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.541734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.541767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 
00:36:41.932 [2024-12-16 02:58:12.541980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.542016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.542224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.542257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.542451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.542483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.542593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.542627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.542836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.542893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 
00:36:41.932 [2024-12-16 02:58:12.543188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.543221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.543352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.543385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.543579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.543611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.543800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.543833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.544025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.544059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 
00:36:41.932 [2024-12-16 02:58:12.544253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.544286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.544540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.544572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.544692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.544725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.544915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.544950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.545222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.545254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 
00:36:41.932 [2024-12-16 02:58:12.545463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.545495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.545688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.545721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.545925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.545959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.546084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.546118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 00:36:41.932 [2024-12-16 02:58:12.546309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:41.932 [2024-12-16 02:58:12.546342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:41.932 qpair failed and we were unable to recover it. 
00:36:41.932 [2024-12-16 02:58:12.546545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.263 [2024-12-16 02:58:12.546578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.263 qpair failed and we were unable to recover it. 00:36:42.263 [2024-12-16 02:58:12.546777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.263 [2024-12-16 02:58:12.546810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.263 qpair failed and we were unable to recover it. 00:36:42.263 [2024-12-16 02:58:12.546963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.263 [2024-12-16 02:58:12.546997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.263 qpair failed and we were unable to recover it. 00:36:42.263 [2024-12-16 02:58:12.547130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.263 [2024-12-16 02:58:12.547163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.263 qpair failed and we were unable to recover it. 00:36:42.263 [2024-12-16 02:58:12.547341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.263 [2024-12-16 02:58:12.547373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.263 qpair failed and we were unable to recover it. 
00:36:42.263 [2024-12-16 02:58:12.547500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.263 [2024-12-16 02:58:12.547533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.263 qpair failed and we were unable to recover it. 00:36:42.263 [2024-12-16 02:58:12.547825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.263 [2024-12-16 02:58:12.547878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.263 qpair failed and we were unable to recover it. 00:36:42.263 [2024-12-16 02:58:12.548011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.263 [2024-12-16 02:58:12.548042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.263 qpair failed and we were unable to recover it. 00:36:42.263 [2024-12-16 02:58:12.548203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.263 [2024-12-16 02:58:12.548234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.263 qpair failed and we were unable to recover it. 00:36:42.263 [2024-12-16 02:58:12.548375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.263 [2024-12-16 02:58:12.548407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.263 qpair failed and we were unable to recover it. 
00:36:42.263 [2024-12-16 02:58:12.548552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.263 [2024-12-16 02:58:12.548585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.263 qpair failed and we were unable to recover it. 00:36:42.263 [2024-12-16 02:58:12.548722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.263 [2024-12-16 02:58:12.548756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.263 qpair failed and we were unable to recover it. 00:36:42.263 [2024-12-16 02:58:12.548886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.263 [2024-12-16 02:58:12.548921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.263 qpair failed and we were unable to recover it. 00:36:42.263 [2024-12-16 02:58:12.549201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.263 [2024-12-16 02:58:12.549234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.263 qpair failed and we were unable to recover it. 00:36:42.263 [2024-12-16 02:58:12.549351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.263 [2024-12-16 02:58:12.549386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.263 qpair failed and we were unable to recover it. 
00:36:42.263 [2024-12-16 02:58:12.549635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.263 [2024-12-16 02:58:12.549668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.263 qpair failed and we were unable to recover it. 00:36:42.263 [2024-12-16 02:58:12.549808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.263 [2024-12-16 02:58:12.549841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.263 qpair failed and we were unable to recover it. 00:36:42.263 [2024-12-16 02:58:12.550049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.263 [2024-12-16 02:58:12.550083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 00:36:42.264 [2024-12-16 02:58:12.550224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.550256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 00:36:42.264 [2024-12-16 02:58:12.550476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.550510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 
00:36:42.264 [2024-12-16 02:58:12.550628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.550660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 00:36:42.264 [2024-12-16 02:58:12.550802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.550834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 00:36:42.264 [2024-12-16 02:58:12.550980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.551014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 00:36:42.264 [2024-12-16 02:58:12.551192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.551225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 00:36:42.264 [2024-12-16 02:58:12.551470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.551508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 
00:36:42.264 [2024-12-16 02:58:12.551626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.551659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 00:36:42.264 [2024-12-16 02:58:12.551845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.551889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 00:36:42.264 [2024-12-16 02:58:12.552060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.552092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 00:36:42.264 [2024-12-16 02:58:12.552281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.552315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 00:36:42.264 [2024-12-16 02:58:12.552506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.552538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 
00:36:42.264 [2024-12-16 02:58:12.552657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.552690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 00:36:42.264 [2024-12-16 02:58:12.552939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.552974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 00:36:42.264 [2024-12-16 02:58:12.553148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.553181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 00:36:42.264 [2024-12-16 02:58:12.553369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.553401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 00:36:42.264 [2024-12-16 02:58:12.553596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.553628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 
00:36:42.264 [2024-12-16 02:58:12.553757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.553790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 00:36:42.264 [2024-12-16 02:58:12.554017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.554052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 00:36:42.264 [2024-12-16 02:58:12.554238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.554270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 00:36:42.264 [2024-12-16 02:58:12.554410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.554445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 00:36:42.264 [2024-12-16 02:58:12.554618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.554651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 
00:36:42.264 [2024-12-16 02:58:12.554830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.554871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 00:36:42.264 [2024-12-16 02:58:12.555136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.555170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 00:36:42.264 [2024-12-16 02:58:12.555359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.555392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 00:36:42.264 [2024-12-16 02:58:12.555523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.555555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 00:36:42.264 [2024-12-16 02:58:12.555738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.555771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 
00:36:42.264 [2024-12-16 02:58:12.555945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.555979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 00:36:42.264 [2024-12-16 02:58:12.556179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.556212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 00:36:42.264 [2024-12-16 02:58:12.556332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.556365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 00:36:42.264 [2024-12-16 02:58:12.556492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.556524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 00:36:42.264 [2024-12-16 02:58:12.556750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.556782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 
00:36:42.264 [2024-12-16 02:58:12.556918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.556953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 00:36:42.264 [2024-12-16 02:58:12.557230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.557263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 00:36:42.264 [2024-12-16 02:58:12.557451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.557485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 00:36:42.264 [2024-12-16 02:58:12.557670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.557703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 00:36:42.264 [2024-12-16 02:58:12.557879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.557913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 
00:36:42.264 [2024-12-16 02:58:12.558095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.264 [2024-12-16 02:58:12.558130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.264 qpair failed and we were unable to recover it. 00:36:42.264 [2024-12-16 02:58:12.558254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.558286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 00:36:42.265 [2024-12-16 02:58:12.558496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.558529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 00:36:42.265 [2024-12-16 02:58:12.558798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.558832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 00:36:42.265 [2024-12-16 02:58:12.558979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.559013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 
00:36:42.265 [2024-12-16 02:58:12.559138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.559171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 00:36:42.265 [2024-12-16 02:58:12.559350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.559383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 00:36:42.265 [2024-12-16 02:58:12.559496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.559530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 00:36:42.265 [2024-12-16 02:58:12.559734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.559767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 00:36:42.265 [2024-12-16 02:58:12.559885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.559925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 
00:36:42.265 [2024-12-16 02:58:12.560113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.560147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 00:36:42.265 [2024-12-16 02:58:12.560300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.560333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 00:36:42.265 [2024-12-16 02:58:12.560523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.560555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 00:36:42.265 [2024-12-16 02:58:12.560663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.560696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 00:36:42.265 [2024-12-16 02:58:12.560810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.560843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 
00:36:42.265 [2024-12-16 02:58:12.561049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.561083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 00:36:42.265 [2024-12-16 02:58:12.561277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.561309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 00:36:42.265 [2024-12-16 02:58:12.561438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.561470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 00:36:42.265 [2024-12-16 02:58:12.561647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.561680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 00:36:42.265 [2024-12-16 02:58:12.561950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.561985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 
00:36:42.265 [2024-12-16 02:58:12.562126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.562158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 00:36:42.265 [2024-12-16 02:58:12.562273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.562306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 00:36:42.265 [2024-12-16 02:58:12.562516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.562548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 00:36:42.265 [2024-12-16 02:58:12.562762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.562796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 00:36:42.265 [2024-12-16 02:58:12.562983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.563017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 
00:36:42.265 [2024-12-16 02:58:12.563198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.563230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 00:36:42.265 [2024-12-16 02:58:12.563409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.563442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 00:36:42.265 [2024-12-16 02:58:12.563685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.563718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 00:36:42.265 [2024-12-16 02:58:12.563902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.563936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 00:36:42.265 [2024-12-16 02:58:12.564040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.564074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 
00:36:42.265 [2024-12-16 02:58:12.564185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.564217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 00:36:42.265 [2024-12-16 02:58:12.564409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.564441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 00:36:42.265 [2024-12-16 02:58:12.564642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.564675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 00:36:42.265 [2024-12-16 02:58:12.564908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.564942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 00:36:42.265 [2024-12-16 02:58:12.565071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.565104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 
00:36:42.265 [2024-12-16 02:58:12.565212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.565245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 00:36:42.265 [2024-12-16 02:58:12.565452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.565484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 00:36:42.265 [2024-12-16 02:58:12.565583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.565616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 00:36:42.265 [2024-12-16 02:58:12.565794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.565827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 00:36:42.265 [2024-12-16 02:58:12.565977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.265 [2024-12-16 02:58:12.566010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.265 qpair failed and we were unable to recover it. 
00:36:42.265 [2024-12-16 02:58:12.566198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.566231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.566365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.566398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.566590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.566623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.566753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.566786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.567073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.567107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.567295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.567328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.567570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.567603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.567783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.567817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.567994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.568067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.568332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.568379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.568563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.568597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.568736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.568778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.568951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.568985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.569130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.569162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.569285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.569317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.569426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.569459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.569578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.569610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.569734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.569767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.570006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.570041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.570214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.570247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.570431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.570465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.570637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.570669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.570789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.570822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.570969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.571005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.571190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.571223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.571418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.571451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.571570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.571603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.571806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.571838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.572054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.572088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.572287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.572319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.572535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.572567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.572687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.572719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.572830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.572872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.573068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.573101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.573288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.573321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.573586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.573619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.573761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.573794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.574074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.574110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.266 [2024-12-16 02:58:12.574323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.266 [2024-12-16 02:58:12.574355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.266 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.574635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.574669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.574769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.574801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.575082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.575117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.575294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.575328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.575536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.575569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.575764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.575797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.575997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.576031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.576163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.576197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.576392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.576425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.576611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.576645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.576762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.576801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.577074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.577108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.577281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.577314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.577490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.577523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.577665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.577698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.577882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.577917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.578091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.578124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.578307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.578340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.578519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.578552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.578746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.578779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.578976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.579011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.579253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.579286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.579472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.579505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.579635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.579668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.579865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.579900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.580092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.580125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.580434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.580468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.580580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.580613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.580735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.580768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.580955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.580990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.581232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.581265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.581383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.581416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.581609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.581645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.581884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.581919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.582082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.267 [2024-12-16 02:58:12.582115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.267 qpair failed and we were unable to recover it.
00:36:42.267 [2024-12-16 02:58:12.582304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.268 [2024-12-16 02:58:12.582337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.268 qpair failed and we were unable to recover it.
00:36:42.268 [2024-12-16 02:58:12.582604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.268 [2024-12-16 02:58:12.582638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.268 qpair failed and we were unable to recover it.
00:36:42.268 [2024-12-16 02:58:12.582759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:36:42.268 [2024-12-16 02:58:12.582821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.268 [2024-12-16 02:58:12.582864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.268 qpair failed and we were unable to recover it.
00:36:42.268 [2024-12-16 02:58:12.583074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.268 [2024-12-16 02:58:12.583108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.268 qpair failed and we were unable to recover it.
00:36:42.268 [2024-12-16 02:58:12.583213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.268 [2024-12-16 02:58:12.583247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.268 qpair failed and we were unable to recover it.
00:36:42.268 [2024-12-16 02:58:12.583376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.268 [2024-12-16 02:58:12.583408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.268 qpair failed and we were unable to recover it.
00:36:42.268 [2024-12-16 02:58:12.583604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.268 [2024-12-16 02:58:12.583637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.268 qpair failed and we were unable to recover it.
00:36:42.268 [2024-12-16 02:58:12.583824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.268 [2024-12-16 02:58:12.583866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.268 qpair failed and we were unable to recover it.
00:36:42.268 [2024-12-16 02:58:12.583981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.268 [2024-12-16 02:58:12.584015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.268 qpair failed and we were unable to recover it.
00:36:42.268 [2024-12-16 02:58:12.584203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.268 [2024-12-16 02:58:12.584236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.268 qpair failed and we were unable to recover it.
00:36:42.268 [2024-12-16 02:58:12.584416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.268 [2024-12-16 02:58:12.584448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.268 qpair failed and we were unable to recover it.
00:36:42.268 [2024-12-16 02:58:12.584690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.268 [2024-12-16 02:58:12.584723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.268 qpair failed and we were unable to recover it.
00:36:42.268 [2024-12-16 02:58:12.584843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.268 [2024-12-16 02:58:12.584885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.268 qpair failed and we were unable to recover it.
00:36:42.268 [2024-12-16 02:58:12.585060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.268 [2024-12-16 02:58:12.585093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.268 qpair failed and we were unable to recover it.
00:36:42.268 [2024-12-16 02:58:12.585265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.268 [2024-12-16 02:58:12.585298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.268 qpair failed and we were unable to recover it.
00:36:42.268 [2024-12-16 02:58:12.585490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.268 [2024-12-16 02:58:12.585526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:42.268 qpair failed and we were unable to recover it.
00:36:42.268 [2024-12-16 02:58:12.585712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.268 [2024-12-16 02:58:12.585745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:42.268 qpair failed and we were unable to recover it.
00:36:42.268 [2024-12-16 02:58:12.585935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.268 [2024-12-16 02:58:12.585970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:42.268 qpair failed and we were unable to recover it.
00:36:42.268 [2024-12-16 02:58:12.586096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.268 [2024-12-16 02:58:12.586129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:42.268 qpair failed and we were unable to recover it.
00:36:42.268 [2024-12-16 02:58:12.586251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.268 [2024-12-16 02:58:12.586283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:42.268 qpair failed and we were unable to recover it.
00:36:42.268 [2024-12-16 02:58:12.586453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.268 [2024-12-16 02:58:12.586486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:42.268 qpair failed and we were unable to recover it.
00:36:42.268 [2024-12-16 02:58:12.586663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.268 [2024-12-16 02:58:12.586697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.268 qpair failed and we were unable to recover it. 00:36:42.268 [2024-12-16 02:58:12.586804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.268 [2024-12-16 02:58:12.586837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.268 qpair failed and we were unable to recover it. 00:36:42.268 [2024-12-16 02:58:12.587047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.268 [2024-12-16 02:58:12.587081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.268 qpair failed and we were unable to recover it. 00:36:42.268 [2024-12-16 02:58:12.587295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.268 [2024-12-16 02:58:12.587328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.268 qpair failed and we were unable to recover it. 00:36:42.268 [2024-12-16 02:58:12.587522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.268 [2024-12-16 02:58:12.587553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.268 qpair failed and we were unable to recover it. 
00:36:42.268 [2024-12-16 02:58:12.587659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.268 [2024-12-16 02:58:12.587690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.268 qpair failed and we were unable to recover it. 00:36:42.268 [2024-12-16 02:58:12.587816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.268 [2024-12-16 02:58:12.587858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.268 qpair failed and we were unable to recover it. 00:36:42.268 [2024-12-16 02:58:12.588074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.268 [2024-12-16 02:58:12.588114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.268 qpair failed and we were unable to recover it. 00:36:42.268 [2024-12-16 02:58:12.588227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.268 [2024-12-16 02:58:12.588259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.268 qpair failed and we were unable to recover it. 00:36:42.268 [2024-12-16 02:58:12.588376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.268 [2024-12-16 02:58:12.588409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.268 qpair failed and we were unable to recover it. 
00:36:42.268 [2024-12-16 02:58:12.588522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.268 [2024-12-16 02:58:12.588554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.268 qpair failed and we were unable to recover it. 00:36:42.268 [2024-12-16 02:58:12.588825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.268 [2024-12-16 02:58:12.588870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.268 qpair failed and we were unable to recover it. 00:36:42.268 [2024-12-16 02:58:12.589110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.268 [2024-12-16 02:58:12.589143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.268 qpair failed and we were unable to recover it. 00:36:42.268 [2024-12-16 02:58:12.589394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.268 [2024-12-16 02:58:12.589428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.268 qpair failed and we were unable to recover it. 00:36:42.268 [2024-12-16 02:58:12.589558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.268 [2024-12-16 02:58:12.589591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.268 qpair failed and we were unable to recover it. 
00:36:42.269 [2024-12-16 02:58:12.589772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.589804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 00:36:42.269 [2024-12-16 02:58:12.590076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.590114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 00:36:42.269 [2024-12-16 02:58:12.590254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.590288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 00:36:42.269 [2024-12-16 02:58:12.590531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.590565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 00:36:42.269 [2024-12-16 02:58:12.590702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.590736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 
00:36:42.269 [2024-12-16 02:58:12.590930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.590966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 00:36:42.269 [2024-12-16 02:58:12.591237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.591271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 00:36:42.269 [2024-12-16 02:58:12.591459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.591493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 00:36:42.269 [2024-12-16 02:58:12.591731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.591766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 00:36:42.269 [2024-12-16 02:58:12.591959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.591995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 
00:36:42.269 [2024-12-16 02:58:12.592200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.592234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 00:36:42.269 [2024-12-16 02:58:12.592448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.592481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 00:36:42.269 [2024-12-16 02:58:12.592671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.592705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 00:36:42.269 [2024-12-16 02:58:12.592844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.592889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 00:36:42.269 [2024-12-16 02:58:12.593008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.593041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 
00:36:42.269 [2024-12-16 02:58:12.593162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.593196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 00:36:42.269 [2024-12-16 02:58:12.593330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.593363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 00:36:42.269 [2024-12-16 02:58:12.593504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.593537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 00:36:42.269 [2024-12-16 02:58:12.593714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.593749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 00:36:42.269 [2024-12-16 02:58:12.593942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.594020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 
00:36:42.269 [2024-12-16 02:58:12.594159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.594196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 00:36:42.269 [2024-12-16 02:58:12.594411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.594446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 00:36:42.269 [2024-12-16 02:58:12.594625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.594659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 00:36:42.269 [2024-12-16 02:58:12.594860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.594896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 00:36:42.269 [2024-12-16 02:58:12.595030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.595064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 
00:36:42.269 [2024-12-16 02:58:12.595247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.595281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 00:36:42.269 [2024-12-16 02:58:12.595486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.595520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 00:36:42.269 [2024-12-16 02:58:12.595622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.595652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 00:36:42.269 [2024-12-16 02:58:12.595862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.595897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 00:36:42.269 [2024-12-16 02:58:12.596139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.596174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 
00:36:42.269 [2024-12-16 02:58:12.596356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.596389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 00:36:42.269 [2024-12-16 02:58:12.596622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.596656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 00:36:42.269 [2024-12-16 02:58:12.596897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.596962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 00:36:42.269 [2024-12-16 02:58:12.597150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.597184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 00:36:42.269 [2024-12-16 02:58:12.597378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.597411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 
00:36:42.269 [2024-12-16 02:58:12.597672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.597705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 00:36:42.269 [2024-12-16 02:58:12.597882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.597916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 00:36:42.269 [2024-12-16 02:58:12.598126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.598159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.269 qpair failed and we were unable to recover it. 00:36:42.269 [2024-12-16 02:58:12.598349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.269 [2024-12-16 02:58:12.598381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 00:36:42.270 [2024-12-16 02:58:12.598591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.598623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 
00:36:42.270 [2024-12-16 02:58:12.598807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.598839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 00:36:42.270 [2024-12-16 02:58:12.599103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.599137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 00:36:42.270 [2024-12-16 02:58:12.599265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.599299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 00:36:42.270 [2024-12-16 02:58:12.599499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.599533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 00:36:42.270 [2024-12-16 02:58:12.599715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.599747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 
00:36:42.270 [2024-12-16 02:58:12.599937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.599971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 00:36:42.270 [2024-12-16 02:58:12.600177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.600211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 00:36:42.270 [2024-12-16 02:58:12.600393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.600427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 00:36:42.270 [2024-12-16 02:58:12.600552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.600586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 00:36:42.270 [2024-12-16 02:58:12.600713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.600745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 
00:36:42.270 [2024-12-16 02:58:12.600961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.600994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 00:36:42.270 [2024-12-16 02:58:12.601178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.601209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 00:36:42.270 [2024-12-16 02:58:12.601418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.601452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 00:36:42.270 [2024-12-16 02:58:12.601712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.601745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 00:36:42.270 [2024-12-16 02:58:12.601886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.601920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 
00:36:42.270 [2024-12-16 02:58:12.602102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.602135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 00:36:42.270 [2024-12-16 02:58:12.602252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.602284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 00:36:42.270 [2024-12-16 02:58:12.602468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.602502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 00:36:42.270 [2024-12-16 02:58:12.602673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.602716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 00:36:42.270 [2024-12-16 02:58:12.602889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.602963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 
00:36:42.270 [2024-12-16 02:58:12.603111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.603152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 00:36:42.270 [2024-12-16 02:58:12.603274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.603307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 00:36:42.270 [2024-12-16 02:58:12.603504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.603536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 00:36:42.270 [2024-12-16 02:58:12.603720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.603753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 00:36:42.270 [2024-12-16 02:58:12.603964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.603998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 
00:36:42.270 [2024-12-16 02:58:12.604123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.604156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 00:36:42.270 [2024-12-16 02:58:12.604347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.604380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 00:36:42.270 [2024-12-16 02:58:12.604510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.604543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 00:36:42.270 [2024-12-16 02:58:12.604685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.604719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 00:36:42.270 [2024-12-16 02:58:12.604959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.604994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 
00:36:42.270 [2024-12-16 02:58:12.605172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.605205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 00:36:42.270 [2024-12-16 02:58:12.605398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.605430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 00:36:42.270 [2024-12-16 02:58:12.605635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.605675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 00:36:42.270 [2024-12-16 02:58:12.605765] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:42.270 [2024-12-16 02:58:12.605791] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:42.270 [2024-12-16 02:58:12.605801] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:42.270 [2024-12-16 02:58:12.605808] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:42.270 [2024-12-16 02:58:12.605814] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:42.270 [2024-12-16 02:58:12.605809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.605840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 00:36:42.270 [2024-12-16 02:58:12.606137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.606171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 00:36:42.270 [2024-12-16 02:58:12.606378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.270 [2024-12-16 02:58:12.606411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:42.270 qpair failed and we were unable to recover it. 00:36:42.271 [2024-12-16 02:58:12.606595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.271 [2024-12-16 02:58:12.606628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:42.271 qpair failed and we were unable to recover it. 00:36:42.271 [2024-12-16 02:58:12.606811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.271 [2024-12-16 02:58:12.606844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:42.271 qpair failed and we were unable to recover it. 
00:36:42.271 [2024-12-16 02:58:12.607048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.271 [2024-12-16 02:58:12.607082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.271 qpair failed and we were unable to recover it.
00:36:42.271 [2024-12-16 02:58:12.607298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.271 [2024-12-16 02:58:12.607331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.271 qpair failed and we were unable to recover it.
00:36:42.271 [2024-12-16 02:58:12.607324] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5
00:36:42.271 [2024-12-16 02:58:12.607456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.271 [2024-12-16 02:58:12.607490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.271 qpair failed and we were unable to recover it.
00:36:42.271 [2024-12-16 02:58:12.607422] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6
00:36:42.271 [2024-12-16 02:58:12.607505] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:36:42.271 [2024-12-16 02:58:12.607506] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7
00:36:42.271 [2024-12-16 02:58:12.607666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.271 [2024-12-16 02:58:12.607697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.271 qpair failed and we were unable to recover it.
00:36:42.273 [2024-12-16 02:58:12.623227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.273 [2024-12-16 02:58:12.623262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.273 qpair failed and we were unable to recover it.
00:36:42.273 [2024-12-16 02:58:12.623463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.273 [2024-12-16 02:58:12.623498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.273 qpair failed and we were unable to recover it.
00:36:42.273 [2024-12-16 02:58:12.623778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.273 [2024-12-16 02:58:12.623814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.273 qpair failed and we were unable to recover it.
00:36:42.273 [2024-12-16 02:58:12.623984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.273 [2024-12-16 02:58:12.624040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:42.273 qpair failed and we were unable to recover it.
00:36:42.273 [2024-12-16 02:58:12.624167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.273 [2024-12-16 02:58:12.624200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:42.273 qpair failed and we were unable to recover it.
00:36:42.273 [2024-12-16 02:58:12.628513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.273 [2024-12-16 02:58:12.628546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.273 qpair failed and we were unable to recover it. 00:36:42.273 [2024-12-16 02:58:12.628756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.273 [2024-12-16 02:58:12.628793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.273 qpair failed and we were unable to recover it. 00:36:42.273 [2024-12-16 02:58:12.629018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.273 [2024-12-16 02:58:12.629054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.273 qpair failed and we were unable to recover it. 00:36:42.273 [2024-12-16 02:58:12.629187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.273 [2024-12-16 02:58:12.629224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.273 qpair failed and we were unable to recover it. 00:36:42.273 [2024-12-16 02:58:12.629383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.273 [2024-12-16 02:58:12.629437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.273 qpair failed and we were unable to recover it. 
00:36:42.274 [2024-12-16 02:58:12.636491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.274 [2024-12-16 02:58:12.636528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.274 qpair failed and we were unable to recover it. 00:36:42.274 [2024-12-16 02:58:12.636721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.274 [2024-12-16 02:58:12.636756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.274 qpair failed and we were unable to recover it. 00:36:42.274 [2024-12-16 02:58:12.636998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.274 [2024-12-16 02:58:12.637035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.274 qpair failed and we were unable to recover it. 00:36:42.274 [2024-12-16 02:58:12.637238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.274 [2024-12-16 02:58:12.637272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.274 qpair failed and we were unable to recover it. 00:36:42.274 [2024-12-16 02:58:12.637434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.274 [2024-12-16 02:58:12.637498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.274 qpair failed and we were unable to recover it. 
00:36:42.276 [2024-12-16 02:58:12.651399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.276 [2024-12-16 02:58:12.651433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.276 qpair failed and we were unable to recover it. 00:36:42.276 [2024-12-16 02:58:12.651614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.276 [2024-12-16 02:58:12.651647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.276 qpair failed and we were unable to recover it. 00:36:42.276 [2024-12-16 02:58:12.651885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.276 [2024-12-16 02:58:12.651920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.276 qpair failed and we were unable to recover it. 00:36:42.276 [2024-12-16 02:58:12.652199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.276 [2024-12-16 02:58:12.652234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.276 qpair failed and we were unable to recover it. 00:36:42.276 [2024-12-16 02:58:12.652420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.276 [2024-12-16 02:58:12.652454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.276 qpair failed and we were unable to recover it. 
00:36:42.276 [2024-12-16 02:58:12.652584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.276 [2024-12-16 02:58:12.652617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.276 qpair failed and we were unable to recover it. 00:36:42.276 [2024-12-16 02:58:12.652878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.276 [2024-12-16 02:58:12.652921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.276 qpair failed and we were unable to recover it. 00:36:42.276 [2024-12-16 02:58:12.653103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.276 [2024-12-16 02:58:12.653139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.276 qpair failed and we were unable to recover it. 00:36:42.276 [2024-12-16 02:58:12.653350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.276 [2024-12-16 02:58:12.653384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.276 qpair failed and we were unable to recover it. 00:36:42.276 [2024-12-16 02:58:12.653553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.276 [2024-12-16 02:58:12.653587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.276 qpair failed and we were unable to recover it. 
00:36:42.276 [2024-12-16 02:58:12.653857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.276 [2024-12-16 02:58:12.653894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.276 qpair failed and we were unable to recover it. 00:36:42.276 [2024-12-16 02:58:12.654140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.276 [2024-12-16 02:58:12.654174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.276 qpair failed and we were unable to recover it. 00:36:42.276 [2024-12-16 02:58:12.654425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.276 [2024-12-16 02:58:12.654458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.276 qpair failed and we were unable to recover it. 00:36:42.276 [2024-12-16 02:58:12.654695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.276 [2024-12-16 02:58:12.654730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.276 qpair failed and we were unable to recover it. 00:36:42.276 [2024-12-16 02:58:12.655024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.276 [2024-12-16 02:58:12.655061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.276 qpair failed and we were unable to recover it. 
00:36:42.276 [2024-12-16 02:58:12.655323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.276 [2024-12-16 02:58:12.655359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.276 qpair failed and we were unable to recover it. 00:36:42.276 [2024-12-16 02:58:12.655636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.276 [2024-12-16 02:58:12.655671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.276 qpair failed and we were unable to recover it. 00:36:42.276 [2024-12-16 02:58:12.655928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.276 [2024-12-16 02:58:12.655963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.276 qpair failed and we were unable to recover it. 00:36:42.276 [2024-12-16 02:58:12.656206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.276 [2024-12-16 02:58:12.656240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.276 qpair failed and we were unable to recover it. 00:36:42.276 [2024-12-16 02:58:12.656492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.276 [2024-12-16 02:58:12.656528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.276 qpair failed and we were unable to recover it. 
00:36:42.276 [2024-12-16 02:58:12.656800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.276 [2024-12-16 02:58:12.656836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.276 qpair failed and we were unable to recover it. 00:36:42.276 [2024-12-16 02:58:12.657113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.276 [2024-12-16 02:58:12.657151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.276 qpair failed and we were unable to recover it. 00:36:42.277 [2024-12-16 02:58:12.657364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.657398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 00:36:42.277 [2024-12-16 02:58:12.657645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.657680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 00:36:42.277 [2024-12-16 02:58:12.657810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.657845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 
00:36:42.277 [2024-12-16 02:58:12.658053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.658088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 00:36:42.277 [2024-12-16 02:58:12.658264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.658298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 00:36:42.277 [2024-12-16 02:58:12.658523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.658557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 00:36:42.277 [2024-12-16 02:58:12.658698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.658732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 00:36:42.277 [2024-12-16 02:58:12.658969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.659005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 
00:36:42.277 [2024-12-16 02:58:12.659126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.659162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 00:36:42.277 [2024-12-16 02:58:12.659334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.659371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 00:36:42.277 [2024-12-16 02:58:12.659544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.659577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 00:36:42.277 [2024-12-16 02:58:12.659768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.659802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 00:36:42.277 [2024-12-16 02:58:12.660010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.660048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 
00:36:42.277 [2024-12-16 02:58:12.660172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.660206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 00:36:42.277 [2024-12-16 02:58:12.660399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.660434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 00:36:42.277 [2024-12-16 02:58:12.660677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.660713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 00:36:42.277 [2024-12-16 02:58:12.660841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.660885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 00:36:42.277 [2024-12-16 02:58:12.661150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.661186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 
00:36:42.277 [2024-12-16 02:58:12.661411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.661446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 00:36:42.277 [2024-12-16 02:58:12.661696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.661731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 00:36:42.277 [2024-12-16 02:58:12.661998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.662034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 00:36:42.277 [2024-12-16 02:58:12.662224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.662257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 00:36:42.277 [2024-12-16 02:58:12.662422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.662457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 
00:36:42.277 [2024-12-16 02:58:12.662664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.662699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 00:36:42.277 [2024-12-16 02:58:12.662879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.662917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 00:36:42.277 [2024-12-16 02:58:12.663137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.663193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 00:36:42.277 [2024-12-16 02:58:12.663425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.663458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 00:36:42.277 [2024-12-16 02:58:12.663665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.663699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 
00:36:42.277 [2024-12-16 02:58:12.663877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.663914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 00:36:42.277 [2024-12-16 02:58:12.664098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.664131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 00:36:42.277 [2024-12-16 02:58:12.664310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.664344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 00:36:42.277 [2024-12-16 02:58:12.664532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.664565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 00:36:42.277 [2024-12-16 02:58:12.664826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.664869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 
00:36:42.277 [2024-12-16 02:58:12.665144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.665179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 00:36:42.277 [2024-12-16 02:58:12.665393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.665425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 00:36:42.277 [2024-12-16 02:58:12.665709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.665742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 00:36:42.277 [2024-12-16 02:58:12.665994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.666029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 00:36:42.277 [2024-12-16 02:58:12.666275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.277 [2024-12-16 02:58:12.666308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.277 qpair failed and we were unable to recover it. 
00:36:42.278 [2024-12-16 02:58:12.666549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.278 [2024-12-16 02:58:12.666589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.278 qpair failed and we were unable to recover it. 00:36:42.278 [2024-12-16 02:58:12.666762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.278 [2024-12-16 02:58:12.666795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.278 qpair failed and we were unable to recover it. 00:36:42.278 [2024-12-16 02:58:12.666995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.278 [2024-12-16 02:58:12.667029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.278 qpair failed and we were unable to recover it. 00:36:42.278 [2024-12-16 02:58:12.667288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.278 [2024-12-16 02:58:12.667321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.278 qpair failed and we were unable to recover it. 00:36:42.278 [2024-12-16 02:58:12.667605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.278 [2024-12-16 02:58:12.667637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.278 qpair failed and we were unable to recover it. 
00:36:42.278 [2024-12-16 02:58:12.667864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.278 [2024-12-16 02:58:12.667898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.278 qpair failed and we were unable to recover it. 00:36:42.278 [2024-12-16 02:58:12.668094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.278 [2024-12-16 02:58:12.668127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.278 qpair failed and we were unable to recover it. 00:36:42.278 [2024-12-16 02:58:12.668391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.278 [2024-12-16 02:58:12.668423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.278 qpair failed and we were unable to recover it. 00:36:42.278 [2024-12-16 02:58:12.668596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.278 [2024-12-16 02:58:12.668625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.278 qpair failed and we were unable to recover it. 00:36:42.278 [2024-12-16 02:58:12.668801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.278 [2024-12-16 02:58:12.668830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.278 qpair failed and we were unable to recover it. 
00:36:42.278 [2024-12-16 02:58:12.669029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.278 [2024-12-16 02:58:12.669060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.278 qpair failed and we were unable to recover it. 00:36:42.278 [2024-12-16 02:58:12.669337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.278 [2024-12-16 02:58:12.669366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.278 qpair failed and we were unable to recover it. 00:36:42.278 [2024-12-16 02:58:12.669630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.278 [2024-12-16 02:58:12.669659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.278 qpair failed and we were unable to recover it. 00:36:42.278 [2024-12-16 02:58:12.669861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.278 [2024-12-16 02:58:12.669892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.278 qpair failed and we were unable to recover it. 00:36:42.278 [2024-12-16 02:58:12.670071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.278 [2024-12-16 02:58:12.670101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.278 qpair failed and we were unable to recover it. 
00:36:42.278 [2024-12-16 02:58:12.670283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.278 [2024-12-16 02:58:12.670312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:42.278 qpair failed and we were unable to recover it.
[repeated log output collapsed: the same three-line failure — connect() errno = 111 (ECONNREFUSED) from posix_sock_create, the nvme_tcp_qpair_connect_sock connection error, and "qpair failed and we were unable to recover it." — recurs continuously against addr=10.0.0.2, port=4420 for tqpairs 0x7f2738000b90, 0x7f273c000b90, and 0x7f2744000b90, with timestamps from 2024-12-16 02:58:12.670283 through 02:58:12.697474]
00:36:42.281 [2024-12-16 02:58:12.697646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.281 [2024-12-16 02:58:12.697678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.281 qpair failed and we were unable to recover it. 00:36:42.281 [2024-12-16 02:58:12.697866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.281 [2024-12-16 02:58:12.697900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.281 qpair failed and we were unable to recover it. 00:36:42.281 [2024-12-16 02:58:12.698019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.281 [2024-12-16 02:58:12.698052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.281 qpair failed and we were unable to recover it. 00:36:42.281 [2024-12-16 02:58:12.698232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.281 [2024-12-16 02:58:12.698263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.281 qpair failed and we were unable to recover it. 00:36:42.281 [2024-12-16 02:58:12.698512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.281 [2024-12-16 02:58:12.698545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.281 qpair failed and we were unable to recover it. 
00:36:42.281 [2024-12-16 02:58:12.698719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.281 [2024-12-16 02:58:12.698751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.281 qpair failed and we were unable to recover it. 00:36:42.281 [2024-12-16 02:58:12.698935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.281 [2024-12-16 02:58:12.698970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.281 qpair failed and we were unable to recover it. 00:36:42.281 [2024-12-16 02:58:12.699147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.281 [2024-12-16 02:58:12.699180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.281 qpair failed and we were unable to recover it. 00:36:42.281 [2024-12-16 02:58:12.699381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.281 [2024-12-16 02:58:12.699414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.281 qpair failed and we were unable to recover it. 00:36:42.281 [2024-12-16 02:58:12.699672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.281 [2024-12-16 02:58:12.699705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.281 qpair failed and we were unable to recover it. 
00:36:42.281 [2024-12-16 02:58:12.699941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.281 [2024-12-16 02:58:12.699975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.281 qpair failed and we were unable to recover it. 00:36:42.281 [2024-12-16 02:58:12.700166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.281 [2024-12-16 02:58:12.700199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.281 qpair failed and we were unable to recover it. 00:36:42.281 [2024-12-16 02:58:12.700469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.281 [2024-12-16 02:58:12.700501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.281 qpair failed and we were unable to recover it. 00:36:42.281 [2024-12-16 02:58:12.700781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.281 [2024-12-16 02:58:12.700813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.281 qpair failed and we were unable to recover it. 00:36:42.281 [2024-12-16 02:58:12.701093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.281 [2024-12-16 02:58:12.701126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.281 qpair failed and we were unable to recover it. 
00:36:42.281 [2024-12-16 02:58:12.701388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.281 [2024-12-16 02:58:12.701419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.281 qpair failed and we were unable to recover it. 00:36:42.281 [2024-12-16 02:58:12.701619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.281 [2024-12-16 02:58:12.701652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.281 qpair failed and we were unable to recover it. 00:36:42.281 [2024-12-16 02:58:12.701834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.281 [2024-12-16 02:58:12.701880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.281 qpair failed and we were unable to recover it. 00:36:42.282 [2024-12-16 02:58:12.702068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.702100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 00:36:42.282 [2024-12-16 02:58:12.702308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.702341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 
00:36:42.282 [2024-12-16 02:58:12.702578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.702610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 00:36:42.282 [2024-12-16 02:58:12.702805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.702837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 00:36:42.282 [2024-12-16 02:58:12.703102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.703135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 00:36:42.282 [2024-12-16 02:58:12.703395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.703426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 00:36:42.282 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:42.282 [2024-12-16 02:58:12.703708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.703742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 
00:36:42.282 [2024-12-16 02:58:12.703931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.703964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 00:36:42.282 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:36:42.282 [2024-12-16 02:58:12.704208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.704241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 00:36:42.282 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:42.282 [2024-12-16 02:58:12.704482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.704516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 00:36:42.282 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:42.282 [2024-12-16 02:58:12.704757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.704790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 
00:36:42.282 [2024-12-16 02:58:12.704974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.705009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:42.282 qpair failed and we were unable to recover it. 00:36:42.282 [2024-12-16 02:58:12.705248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.705280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 00:36:42.282 [2024-12-16 02:58:12.705480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.705513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 00:36:42.282 [2024-12-16 02:58:12.705759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.705793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 00:36:42.282 [2024-12-16 02:58:12.705935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.705969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 
00:36:42.282 [2024-12-16 02:58:12.706213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.706248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 00:36:42.282 [2024-12-16 02:58:12.706434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.706466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 00:36:42.282 [2024-12-16 02:58:12.706578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.706609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 00:36:42.282 [2024-12-16 02:58:12.706795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.706828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 00:36:42.282 [2024-12-16 02:58:12.706975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.707009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 
00:36:42.282 [2024-12-16 02:58:12.707269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.707301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 00:36:42.282 [2024-12-16 02:58:12.707543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.707575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 00:36:42.282 [2024-12-16 02:58:12.707840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.707886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 00:36:42.282 [2024-12-16 02:58:12.708160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.708194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 00:36:42.282 [2024-12-16 02:58:12.708470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.708506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 
00:36:42.282 [2024-12-16 02:58:12.708691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.708725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 00:36:42.282 [2024-12-16 02:58:12.708985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.709021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 00:36:42.282 [2024-12-16 02:58:12.709290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.709324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 00:36:42.282 [2024-12-16 02:58:12.709536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.709569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 00:36:42.282 [2024-12-16 02:58:12.709741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.709774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 
00:36:42.282 [2024-12-16 02:58:12.710011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.710045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 00:36:42.282 [2024-12-16 02:58:12.710213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.710245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 00:36:42.282 [2024-12-16 02:58:12.710512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.710545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 00:36:42.282 [2024-12-16 02:58:12.710809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.710841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 00:36:42.282 [2024-12-16 02:58:12.711106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.711140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 
00:36:42.282 [2024-12-16 02:58:12.711344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.282 [2024-12-16 02:58:12.711378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.282 qpair failed and we were unable to recover it. 00:36:42.283 [2024-12-16 02:58:12.711664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.283 [2024-12-16 02:58:12.711710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.283 qpair failed and we were unable to recover it. 00:36:42.283 [2024-12-16 02:58:12.712017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.283 [2024-12-16 02:58:12.712056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.283 qpair failed and we were unable to recover it. 00:36:42.283 [2024-12-16 02:58:12.712300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.283 [2024-12-16 02:58:12.712335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.283 qpair failed and we were unable to recover it. 00:36:42.283 [2024-12-16 02:58:12.712690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.283 [2024-12-16 02:58:12.712724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.283 qpair failed and we were unable to recover it. 
00:36:42.283 [2024-12-16 02:58:12.712932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.283 [2024-12-16 02:58:12.712967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.283 qpair failed and we were unable to recover it. 00:36:42.283 [2024-12-16 02:58:12.713143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.283 [2024-12-16 02:58:12.713177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.283 qpair failed and we were unable to recover it. 00:36:42.283 [2024-12-16 02:58:12.713367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.283 [2024-12-16 02:58:12.713400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.283 qpair failed and we were unable to recover it. 00:36:42.283 [2024-12-16 02:58:12.713639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.283 [2024-12-16 02:58:12.713672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.283 qpair failed and we were unable to recover it. 00:36:42.283 [2024-12-16 02:58:12.713859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.283 [2024-12-16 02:58:12.713894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.283 qpair failed and we were unable to recover it. 
00:36:42.283 [2024-12-16 02:58:12.714089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.283 [2024-12-16 02:58:12.714122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.283 qpair failed and we were unable to recover it. 00:36:42.283 [2024-12-16 02:58:12.714401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.283 [2024-12-16 02:58:12.714434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.283 qpair failed and we were unable to recover it. 00:36:42.283 [2024-12-16 02:58:12.714683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.283 [2024-12-16 02:58:12.714716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.283 qpair failed and we were unable to recover it. 00:36:42.283 [2024-12-16 02:58:12.714972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.283 [2024-12-16 02:58:12.715007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.283 qpair failed and we were unable to recover it. 00:36:42.283 [2024-12-16 02:58:12.715184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.283 [2024-12-16 02:58:12.715219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.283 qpair failed and we were unable to recover it. 
00:36:42.283 [2024-12-16 02:58:12.715412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.283 [2024-12-16 02:58:12.715445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.283 qpair failed and we were unable to recover it. 00:36:42.283 [2024-12-16 02:58:12.715701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.283 [2024-12-16 02:58:12.715738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.283 qpair failed and we were unable to recover it. 00:36:42.283 [2024-12-16 02:58:12.715920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.283 [2024-12-16 02:58:12.715955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.283 qpair failed and we were unable to recover it. 00:36:42.283 [2024-12-16 02:58:12.716088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.283 [2024-12-16 02:58:12.716133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.283 qpair failed and we were unable to recover it. 00:36:42.283 [2024-12-16 02:58:12.716304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.283 [2024-12-16 02:58:12.716337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.283 qpair failed and we were unable to recover it. 
00:36:42.283 [2024-12-16 02:58:12.716547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.283 [2024-12-16 02:58:12.716580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.283 qpair failed and we were unable to recover it. 00:36:42.283 [2024-12-16 02:58:12.716710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.283 [2024-12-16 02:58:12.716743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.283 qpair failed and we were unable to recover it. 00:36:42.283 [2024-12-16 02:58:12.717013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.283 [2024-12-16 02:58:12.717048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.283 qpair failed and we were unable to recover it. 00:36:42.283 [2024-12-16 02:58:12.717301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.283 [2024-12-16 02:58:12.717333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.283 qpair failed and we were unable to recover it. 00:36:42.283 [2024-12-16 02:58:12.717554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.283 [2024-12-16 02:58:12.717587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.283 qpair failed and we were unable to recover it. 
00:36:42.285 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:42.286 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:36:42.286 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:42.286 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:42.287 [2024-12-16 02:58:12.754274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.287 [2024-12-16 02:58:12.754314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420
00:36:42.287 qpair failed and we were unable to recover it.
00:36:42.288 [2024-12-16 02:58:12.761965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.288 [2024-12-16 02:58:12.762017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420
00:36:42.288 qpair failed and we were unable to recover it.
00:36:42.288 [2024-12-16 02:58:12.762442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.288 [2024-12-16 02:58:12.762476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.288 qpair failed and we were unable to recover it. 00:36:42.288 [2024-12-16 02:58:12.762663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.288 [2024-12-16 02:58:12.762697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.288 qpair failed and we were unable to recover it. 00:36:42.288 [2024-12-16 02:58:12.762823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.288 [2024-12-16 02:58:12.762869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.288 qpair failed and we were unable to recover it. 00:36:42.288 [2024-12-16 02:58:12.763110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.288 [2024-12-16 02:58:12.763144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.288 qpair failed and we were unable to recover it. 00:36:42.288 [2024-12-16 02:58:12.763417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.288 [2024-12-16 02:58:12.763450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.288 qpair failed and we were unable to recover it. 
00:36:42.288 [2024-12-16 02:58:12.763734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.288 [2024-12-16 02:58:12.763767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.288 qpair failed and we were unable to recover it. 00:36:42.288 [2024-12-16 02:58:12.763956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.288 [2024-12-16 02:58:12.763990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2738000b90 with addr=10.0.0.2, port=4420 00:36:42.288 qpair failed and we were unable to recover it. 00:36:42.288 [2024-12-16 02:58:12.764212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.288 [2024-12-16 02:58:12.764248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:42.288 qpair failed and we were unable to recover it. 00:36:42.288 [2024-12-16 02:58:12.764505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.288 [2024-12-16 02:58:12.764538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:42.288 qpair failed and we were unable to recover it. 00:36:42.288 [2024-12-16 02:58:12.764749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.288 [2024-12-16 02:58:12.764783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:42.288 qpair failed and we were unable to recover it. 
00:36:42.289 [2024-12-16 02:58:12.774255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.289 [2024-12-16 02:58:12.774287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:42.289 qpair failed and we were unable to recover it. 00:36:42.289 [2024-12-16 02:58:12.774529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.289 [2024-12-16 02:58:12.774561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f273c000b90 with addr=10.0.0.2, port=4420 00:36:42.289 qpair failed and we were unable to recover it. 00:36:42.289 [2024-12-16 02:58:12.774781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.289 [2024-12-16 02:58:12.774833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.289 qpair failed and we were unable to recover it. 00:36:42.289 [2024-12-16 02:58:12.775098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.289 [2024-12-16 02:58:12.775134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.289 qpair failed and we were unable to recover it. 00:36:42.289 Malloc0 00:36:42.289 [2024-12-16 02:58:12.775393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.289 [2024-12-16 02:58:12.775426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420 00:36:42.289 qpair failed and we were unable to recover it. 
00:36:42.289 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:42.289 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:36:42.289 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:42.289 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:42.289 [2024-12-16 02:58:12.775713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.289 [2024-12-16 02:58:12.775746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:42.289 qpair failed and we were unable to recover it.
00:36:42.289 [previous three messages repeated for tqpair=0x7f2744000b90 through 02:58:12.777562]
00:36:42.290 [2024-12-16 02:58:12.782883] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:42.290 [2024-12-16 02:58:12.782938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.290 [2024-12-16 02:58:12.782970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2744000b90 with addr=10.0.0.2, port=4420
00:36:42.290 qpair failed and we were unable to recover it.
00:36:42.290 [2024-12-16 02:58:12.785047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.290 [2024-12-16 02:58:12.785092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:42.290 qpair failed and we were unable to recover it.
00:36:42.290 [previous three messages repeated for tqpair=0xd9fcd0]
00:36:42.291 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:42.291 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:36:42.291 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:42.291 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:42.291 [2024-12-16 02:58:12.788415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.291 [2024-12-16 02:58:12.788456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:42.291 qpair failed and we were unable to recover it.
00:36:42.291 [previous three messages repeated for tqpair=0xd9fcd0]
00:36:42.291 [2024-12-16 02:58:12.790640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.291 [2024-12-16 02:58:12.790673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.291 qpair failed and we were unable to recover it. 00:36:42.291 [2024-12-16 02:58:12.790864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.291 [2024-12-16 02:58:12.790898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.291 qpair failed and we were unable to recover it. 00:36:42.291 [2024-12-16 02:58:12.791024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.291 [2024-12-16 02:58:12.791058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.291 qpair failed and we were unable to recover it. 00:36:42.291 [2024-12-16 02:58:12.791237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.291 [2024-12-16 02:58:12.791281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.291 qpair failed and we were unable to recover it. 00:36:42.291 [2024-12-16 02:58:12.791411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.291 [2024-12-16 02:58:12.791443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.291 qpair failed and we were unable to recover it. 
00:36:42.291 [2024-12-16 02:58:12.791678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.291 [2024-12-16 02:58:12.791712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.291 qpair failed and we were unable to recover it. 00:36:42.291 [2024-12-16 02:58:12.791886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.291 [2024-12-16 02:58:12.791920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.291 qpair failed and we were unable to recover it. 00:36:42.291 [2024-12-16 02:58:12.792129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.291 [2024-12-16 02:58:12.792161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.291 qpair failed and we were unable to recover it. 00:36:42.291 [2024-12-16 02:58:12.792420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.291 [2024-12-16 02:58:12.792452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.291 qpair failed and we were unable to recover it. 00:36:42.291 [2024-12-16 02:58:12.792630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.291 [2024-12-16 02:58:12.792663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.291 qpair failed and we were unable to recover it. 
00:36:42.291 [2024-12-16 02:58:12.792926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.291 [2024-12-16 02:58:12.792961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.291 qpair failed and we were unable to recover it. 00:36:42.291 [2024-12-16 02:58:12.793225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.291 [2024-12-16 02:58:12.793258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.291 qpair failed and we were unable to recover it. 00:36:42.291 [2024-12-16 02:58:12.793440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.291 [2024-12-16 02:58:12.793472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.291 qpair failed and we were unable to recover it. 00:36:42.291 [2024-12-16 02:58:12.793656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.291 [2024-12-16 02:58:12.793688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.291 qpair failed and we were unable to recover it. 00:36:42.291 [2024-12-16 02:58:12.793951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.291 [2024-12-16 02:58:12.793985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.291 qpair failed and we were unable to recover it. 
00:36:42.291 [2024-12-16 02:58:12.794136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.291 [2024-12-16 02:58:12.794169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.291 qpair failed and we were unable to recover it. 00:36:42.291 [2024-12-16 02:58:12.794432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.291 [2024-12-16 02:58:12.794464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.291 qpair failed and we were unable to recover it. 00:36:42.291 [2024-12-16 02:58:12.794659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.291 [2024-12-16 02:58:12.794694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.291 qpair failed and we were unable to recover it. 00:36:42.291 [2024-12-16 02:58:12.794934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.291 [2024-12-16 02:58:12.794969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.291 qpair failed and we were unable to recover it. 00:36:42.291 [2024-12-16 02:58:12.795228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.291 [2024-12-16 02:58:12.795260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.291 qpair failed and we were unable to recover it. 
00:36:42.291 [2024-12-16 02:58:12.795508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.291 [2024-12-16 02:58:12.795541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:42.291 qpair failed and we were unable to recover it.
00:36:42.291 [2024-12-16 02:58:12.795832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.291 [2024-12-16 02:58:12.795876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:42.291 qpair failed and we were unable to recover it.
00:36:42.291 [2024-12-16 02:58:12.796136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.291 [2024-12-16 02:58:12.796169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:42.291 qpair failed and we were unable to recover it.
00:36:42.291 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:42.291 [2024-12-16 02:58:12.796396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.291 [2024-12-16 02:58:12.796430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:42.291 qpair failed and we were unable to recover it.
00:36:42.291 [2024-12-16 02:58:12.796685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.292 [2024-12-16 02:58:12.796718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:42.292 qpair failed and we were unable to recover it.
00:36:42.292 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:36:42.292 [2024-12-16 02:58:12.796844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.292 [2024-12-16 02:58:12.796889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:42.292 qpair failed and we were unable to recover it.
00:36:42.292 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:42.292 [2024-12-16 02:58:12.797149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.292 [2024-12-16 02:58:12.797182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:42.292 qpair failed and we were unable to recover it.
00:36:42.292 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:42.292 [2024-12-16 02:58:12.797370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.292 [2024-12-16 02:58:12.797404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:42.292 qpair failed and we were unable to recover it.
00:36:42.292 [2024-12-16 02:58:12.797643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.292 [2024-12-16 02:58:12.797676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.292 qpair failed and we were unable to recover it. 00:36:42.292 [2024-12-16 02:58:12.797990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.292 [2024-12-16 02:58:12.798025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.292 qpair failed and we were unable to recover it. 00:36:42.292 [2024-12-16 02:58:12.798202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.292 [2024-12-16 02:58:12.798234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.292 qpair failed and we were unable to recover it. 00:36:42.292 [2024-12-16 02:58:12.798471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.292 [2024-12-16 02:58:12.798504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.292 qpair failed and we were unable to recover it. 00:36:42.292 [2024-12-16 02:58:12.798687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.292 [2024-12-16 02:58:12.798720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.292 qpair failed and we were unable to recover it. 
00:36:42.292 [2024-12-16 02:58:12.798903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.292 [2024-12-16 02:58:12.798937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.292 qpair failed and we were unable to recover it. 00:36:42.292 [2024-12-16 02:58:12.799175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.292 [2024-12-16 02:58:12.799209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.292 qpair failed and we were unable to recover it. 00:36:42.292 [2024-12-16 02:58:12.799322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.292 [2024-12-16 02:58:12.799354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.292 qpair failed and we were unable to recover it. 00:36:42.292 [2024-12-16 02:58:12.799552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.292 [2024-12-16 02:58:12.799585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.292 qpair failed and we were unable to recover it. 00:36:42.292 [2024-12-16 02:58:12.799822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.292 [2024-12-16 02:58:12.799874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.292 qpair failed and we were unable to recover it. 
00:36:42.292 [2024-12-16 02:58:12.800074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.292 [2024-12-16 02:58:12.800107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.292 qpair failed and we were unable to recover it. 00:36:42.292 [2024-12-16 02:58:12.800358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.292 [2024-12-16 02:58:12.800391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.292 qpair failed and we were unable to recover it. 00:36:42.292 [2024-12-16 02:58:12.800566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.292 [2024-12-16 02:58:12.800598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.292 qpair failed and we were unable to recover it. 00:36:42.292 [2024-12-16 02:58:12.800802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.292 [2024-12-16 02:58:12.800835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.292 qpair failed and we were unable to recover it. 00:36:42.292 [2024-12-16 02:58:12.801112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.292 [2024-12-16 02:58:12.801153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.292 qpair failed and we were unable to recover it. 
00:36:42.292 [2024-12-16 02:58:12.801420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.292 [2024-12-16 02:58:12.801453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.292 qpair failed and we were unable to recover it. 00:36:42.292 [2024-12-16 02:58:12.801626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.292 [2024-12-16 02:58:12.801659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.292 qpair failed and we were unable to recover it. 00:36:42.292 [2024-12-16 02:58:12.801798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.292 [2024-12-16 02:58:12.801830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.292 qpair failed and we were unable to recover it. 00:36:42.292 [2024-12-16 02:58:12.802025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.292 [2024-12-16 02:58:12.802059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.292 qpair failed and we were unable to recover it. 00:36:42.292 [2024-12-16 02:58:12.802319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.292 [2024-12-16 02:58:12.802352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.292 qpair failed and we were unable to recover it. 
00:36:42.292 [2024-12-16 02:58:12.802540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.292 [2024-12-16 02:58:12.802572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.292 qpair failed and we were unable to recover it. 00:36:42.292 [2024-12-16 02:58:12.802835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.292 [2024-12-16 02:58:12.802879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.292 qpair failed and we were unable to recover it. 00:36:42.292 [2024-12-16 02:58:12.803078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.292 [2024-12-16 02:58:12.803112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.292 qpair failed and we were unable to recover it. 00:36:42.292 [2024-12-16 02:58:12.803349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.292 [2024-12-16 02:58:12.803382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.292 qpair failed and we were unable to recover it. 00:36:42.292 [2024-12-16 02:58:12.803617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.292 [2024-12-16 02:58:12.803650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.292 qpair failed and we were unable to recover it. 
00:36:42.292 [2024-12-16 02:58:12.803900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.292 [2024-12-16 02:58:12.803936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:42.292 qpair failed and we were unable to recover it.
00:36:42.292 [2024-12-16 02:58:12.804133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.292 [2024-12-16 02:58:12.804165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:42.292 qpair failed and we were unable to recover it.
00:36:42.292 [2024-12-16 02:58:12.804383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.292 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:42.292 [2024-12-16 02:58:12.804418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:42.292 qpair failed and we were unable to recover it.
00:36:42.292 [2024-12-16 02:58:12.804654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.292 [2024-12-16 02:58:12.804687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:42.292 qpair failed and we were unable to recover it.
00:36:42.292 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:42.292 [2024-12-16 02:58:12.804953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.292 [2024-12-16 02:58:12.804988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:42.292 qpair failed and we were unable to recover it.
00:36:42.292 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:42.292 [2024-12-16 02:58:12.805268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.292 [2024-12-16 02:58:12.805301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:42.292 qpair failed and we were unable to recover it.
00:36:42.292 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:42.292 [2024-12-16 02:58:12.805489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.292 [2024-12-16 02:58:12.805524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:42.292 qpair failed and we were unable to recover it.
00:36:42.292 [2024-12-16 02:58:12.805704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:42.292 [2024-12-16 02:58:12.805737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420
00:36:42.292 qpair failed and we were unable to recover it.
00:36:42.293 [2024-12-16 02:58:12.805998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.293 [2024-12-16 02:58:12.806032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.293 qpair failed and we were unable to recover it. 00:36:42.293 [2024-12-16 02:58:12.806210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.293 [2024-12-16 02:58:12.806243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.293 qpair failed and we were unable to recover it. 00:36:42.293 [2024-12-16 02:58:12.806483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.293 [2024-12-16 02:58:12.806516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.293 qpair failed and we were unable to recover it. 00:36:42.293 [2024-12-16 02:58:12.806698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.293 [2024-12-16 02:58:12.806731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.293 qpair failed and we were unable to recover it. 00:36:42.293 [2024-12-16 02:58:12.806936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.293 [2024-12-16 02:58:12.806971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.293 qpair failed and we were unable to recover it. 
00:36:42.293 [2024-12-16 02:58:12.807142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.293 [2024-12-16 02:58:12.807175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.293 qpair failed and we were unable to recover it. 00:36:42.293 [2024-12-16 02:58:12.807308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.293 [2024-12-16 02:58:12.807341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.293 qpair failed and we were unable to recover it. 00:36:42.293 [2024-12-16 02:58:12.807628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.293 [2024-12-16 02:58:12.807661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.293 qpair failed and we were unable to recover it. 00:36:42.293 [2024-12-16 02:58:12.807832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:42.293 [2024-12-16 02:58:12.807877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9fcd0 with addr=10.0.0.2, port=4420 00:36:42.293 qpair failed and we were unable to recover it. 
00:36:42.293 [2024-12-16 02:58:12.807986] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:42.293 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:42.293 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:36:42.293 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:42.293 [2024-12-16 02:58:12.813548] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.293 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:42.293 [2024-12-16 02:58:12.813665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 [2024-12-16 02:58:12.813708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 [2024-12-16 02:58:12.813731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command [2024-12-16 02:58:12.813751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 [2024-12-16 02:58:12.813802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.293 qpair failed and we were unable to recover it.
00:36:42.293 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:42.293 02:58:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1210106 00:36:42.293 [2024-12-16 02:58:12.823479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.293 [2024-12-16 02:58:12.823560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.293 [2024-12-16 02:58:12.823589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.293 [2024-12-16 02:58:12.823603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.293 [2024-12-16 02:58:12.823617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.293 [2024-12-16 02:58:12.823648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.293 qpair failed and we were unable to recover it. 
00:36:42.293 [2024-12-16 02:58:12.833456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.293 [2024-12-16 02:58:12.833517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.293 [2024-12-16 02:58:12.833537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.293 [2024-12-16 02:58:12.833551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.293 [2024-12-16 02:58:12.833561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.293 [2024-12-16 02:58:12.833581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.293 qpair failed and we were unable to recover it. 
00:36:42.293 [2024-12-16 02:58:12.843490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.293 [2024-12-16 02:58:12.843566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.293 [2024-12-16 02:58:12.843580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.293 [2024-12-16 02:58:12.843587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.293 [2024-12-16 02:58:12.843594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.293 [2024-12-16 02:58:12.843609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.293 qpair failed and we were unable to recover it. 
00:36:42.293 [2024-12-16 02:58:12.853440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.293 [2024-12-16 02:58:12.853529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.293 [2024-12-16 02:58:12.853543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.293 [2024-12-16 02:58:12.853550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.293 [2024-12-16 02:58:12.853556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.293 [2024-12-16 02:58:12.853570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.293 qpair failed and we were unable to recover it. 
00:36:42.293 [2024-12-16 02:58:12.863439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.293 [2024-12-16 02:58:12.863495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.293 [2024-12-16 02:58:12.863508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.293 [2024-12-16 02:58:12.863516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.293 [2024-12-16 02:58:12.863522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.293 [2024-12-16 02:58:12.863537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.293 qpair failed and we were unable to recover it. 
00:36:42.293 [2024-12-16 02:58:12.873477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.293 [2024-12-16 02:58:12.873530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.293 [2024-12-16 02:58:12.873544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.293 [2024-12-16 02:58:12.873551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.293 [2024-12-16 02:58:12.873557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.293 [2024-12-16 02:58:12.873571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.293 qpair failed and we were unable to recover it. 
00:36:42.293 [2024-12-16 02:58:12.883443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.293 [2024-12-16 02:58:12.883498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.293 [2024-12-16 02:58:12.883512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.293 [2024-12-16 02:58:12.883520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.293 [2024-12-16 02:58:12.883526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.293 [2024-12-16 02:58:12.883540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.293 qpair failed and we were unable to recover it. 
00:36:42.293 [2024-12-16 02:58:12.893594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.293 [2024-12-16 02:58:12.893648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.293 [2024-12-16 02:58:12.893662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.293 [2024-12-16 02:58:12.893669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.293 [2024-12-16 02:58:12.893675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.293 [2024-12-16 02:58:12.893690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.293 qpair failed and we were unable to recover it. 
00:36:42.293 [2024-12-16 02:58:12.903603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.293 [2024-12-16 02:58:12.903660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.293 [2024-12-16 02:58:12.903674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.294 [2024-12-16 02:58:12.903681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.294 [2024-12-16 02:58:12.903687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.294 [2024-12-16 02:58:12.903702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.294 qpair failed and we were unable to recover it. 
00:36:42.554 [2024-12-16 02:58:12.913629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.554 [2024-12-16 02:58:12.913685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.554 [2024-12-16 02:58:12.913699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.554 [2024-12-16 02:58:12.913707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.554 [2024-12-16 02:58:12.913714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.554 [2024-12-16 02:58:12.913729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.554 qpair failed and we were unable to recover it. 
00:36:42.554 [2024-12-16 02:58:12.923639] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.554 [2024-12-16 02:58:12.923697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.554 [2024-12-16 02:58:12.923715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.554 [2024-12-16 02:58:12.923723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.554 [2024-12-16 02:58:12.923729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.554 [2024-12-16 02:58:12.923743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.554 qpair failed and we were unable to recover it. 
00:36:42.554 [2024-12-16 02:58:12.933666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.554 [2024-12-16 02:58:12.933720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.554 [2024-12-16 02:58:12.933734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.554 [2024-12-16 02:58:12.933741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.554 [2024-12-16 02:58:12.933748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.554 [2024-12-16 02:58:12.933764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.554 qpair failed and we were unable to recover it. 
00:36:42.554 [2024-12-16 02:58:12.943687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.554 [2024-12-16 02:58:12.943739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.554 [2024-12-16 02:58:12.943753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.554 [2024-12-16 02:58:12.943760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.554 [2024-12-16 02:58:12.943766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.555 [2024-12-16 02:58:12.943781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.555 qpair failed and we were unable to recover it. 
00:36:42.555 [2024-12-16 02:58:12.953746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.555 [2024-12-16 02:58:12.953804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.555 [2024-12-16 02:58:12.953818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.555 [2024-12-16 02:58:12.953825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.555 [2024-12-16 02:58:12.953831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.555 [2024-12-16 02:58:12.953854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.555 qpair failed and we were unable to recover it. 
00:36:42.555 [2024-12-16 02:58:12.963752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.555 [2024-12-16 02:58:12.963808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.555 [2024-12-16 02:58:12.963821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.555 [2024-12-16 02:58:12.963828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.555 [2024-12-16 02:58:12.963837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.555 [2024-12-16 02:58:12.963857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.555 qpair failed and we were unable to recover it. 
00:36:42.555 [2024-12-16 02:58:12.973825] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.555 [2024-12-16 02:58:12.973892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.555 [2024-12-16 02:58:12.973906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.555 [2024-12-16 02:58:12.973913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.555 [2024-12-16 02:58:12.973919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.555 [2024-12-16 02:58:12.973933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.555 qpair failed and we were unable to recover it. 
00:36:42.555 [2024-12-16 02:58:12.983819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.555 [2024-12-16 02:58:12.983880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.555 [2024-12-16 02:58:12.983893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.555 [2024-12-16 02:58:12.983900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.555 [2024-12-16 02:58:12.983907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.555 [2024-12-16 02:58:12.983921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.555 qpair failed and we were unable to recover it. 
00:36:42.555 [2024-12-16 02:58:12.993853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.555 [2024-12-16 02:58:12.993909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.555 [2024-12-16 02:58:12.993922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.555 [2024-12-16 02:58:12.993929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.555 [2024-12-16 02:58:12.993935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.555 [2024-12-16 02:58:12.993950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.555 qpair failed and we were unable to recover it. 
00:36:42.555 [2024-12-16 02:58:13.003874] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.555 [2024-12-16 02:58:13.003933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.555 [2024-12-16 02:58:13.003947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.555 [2024-12-16 02:58:13.003954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.555 [2024-12-16 02:58:13.003961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.555 [2024-12-16 02:58:13.003976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.555 qpair failed and we were unable to recover it. 
00:36:42.555 [2024-12-16 02:58:13.013899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.555 [2024-12-16 02:58:13.013953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.555 [2024-12-16 02:58:13.013967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.555 [2024-12-16 02:58:13.013973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.555 [2024-12-16 02:58:13.013979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.555 [2024-12-16 02:58:13.013994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.555 qpair failed and we were unable to recover it. 
00:36:42.555 [2024-12-16 02:58:13.023862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.555 [2024-12-16 02:58:13.023915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.555 [2024-12-16 02:58:13.023929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.555 [2024-12-16 02:58:13.023936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.555 [2024-12-16 02:58:13.023942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.555 [2024-12-16 02:58:13.023957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.555 qpair failed and we were unable to recover it. 
00:36:42.555 [2024-12-16 02:58:13.033880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.555 [2024-12-16 02:58:13.033937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.555 [2024-12-16 02:58:13.033950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.555 [2024-12-16 02:58:13.033957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.555 [2024-12-16 02:58:13.033963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.555 [2024-12-16 02:58:13.033978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.555 qpair failed and we were unable to recover it. 
00:36:42.555 [2024-12-16 02:58:13.043994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.555 [2024-12-16 02:58:13.044054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.555 [2024-12-16 02:58:13.044068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.555 [2024-12-16 02:58:13.044074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.555 [2024-12-16 02:58:13.044081] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.555 [2024-12-16 02:58:13.044095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.555 qpair failed and we were unable to recover it. 
00:36:42.555 [2024-12-16 02:58:13.054021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.555 [2024-12-16 02:58:13.054074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.555 [2024-12-16 02:58:13.054092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.555 [2024-12-16 02:58:13.054100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.555 [2024-12-16 02:58:13.054107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.555 [2024-12-16 02:58:13.054122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.555 qpair failed and we were unable to recover it. 
00:36:42.555 [2024-12-16 02:58:13.064035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.555 [2024-12-16 02:58:13.064094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.555 [2024-12-16 02:58:13.064108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.555 [2024-12-16 02:58:13.064115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.555 [2024-12-16 02:58:13.064121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.555 [2024-12-16 02:58:13.064135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.555 qpair failed and we were unable to recover it. 
00:36:42.555 [2024-12-16 02:58:13.074066] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.555 [2024-12-16 02:58:13.074118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.555 [2024-12-16 02:58:13.074132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.555 [2024-12-16 02:58:13.074139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.555 [2024-12-16 02:58:13.074145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.555 [2024-12-16 02:58:13.074159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.555 qpair failed and we were unable to recover it. 
00:36:42.555 [2024-12-16 02:58:13.084094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.556 [2024-12-16 02:58:13.084149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.556 [2024-12-16 02:58:13.084163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.556 [2024-12-16 02:58:13.084170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.556 [2024-12-16 02:58:13.084176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.556 [2024-12-16 02:58:13.084191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.556 qpair failed and we were unable to recover it. 
00:36:42.556 [2024-12-16 02:58:13.094160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.556 [2024-12-16 02:58:13.094215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.556 [2024-12-16 02:58:13.094228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.556 [2024-12-16 02:58:13.094235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.556 [2024-12-16 02:58:13.094245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.556 [2024-12-16 02:58:13.094259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.556 qpair failed and we were unable to recover it. 
00:36:42.556 [2024-12-16 02:58:13.104151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.556 [2024-12-16 02:58:13.104207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.556 [2024-12-16 02:58:13.104221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.556 [2024-12-16 02:58:13.104228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.556 [2024-12-16 02:58:13.104234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.556 [2024-12-16 02:58:13.104248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.556 qpair failed and we were unable to recover it. 
00:36:42.556 [2024-12-16 02:58:13.114222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.556 [2024-12-16 02:58:13.114283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.556 [2024-12-16 02:58:13.114297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.556 [2024-12-16 02:58:13.114304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.556 [2024-12-16 02:58:13.114310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.556 [2024-12-16 02:58:13.114324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.556 qpair failed and we were unable to recover it. 
00:36:42.556 [2024-12-16 02:58:13.124214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.556 [2024-12-16 02:58:13.124268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.556 [2024-12-16 02:58:13.124282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.556 [2024-12-16 02:58:13.124288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.556 [2024-12-16 02:58:13.124296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.556 [2024-12-16 02:58:13.124311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.556 qpair failed and we were unable to recover it. 
00:36:42.556 [2024-12-16 02:58:13.134241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:42.556 [2024-12-16 02:58:13.134293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:42.556 [2024-12-16 02:58:13.134307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:42.556 [2024-12-16 02:58:13.134314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:42.556 [2024-12-16 02:58:13.134320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:42.556 [2024-12-16 02:58:13.134335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:42.556 qpair failed and we were unable to recover it. 
00:36:42.556 [2024-12-16 02:58:13.144264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.556 [2024-12-16 02:58:13.144320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.556 [2024-12-16 02:58:13.144333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.556 [2024-12-16 02:58:13.144339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.556 [2024-12-16 02:58:13.144346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.556 [2024-12-16 02:58:13.144360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.556 qpair failed and we were unable to recover it.
00:36:42.556 [2024-12-16 02:58:13.154269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.556 [2024-12-16 02:58:13.154330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.556 [2024-12-16 02:58:13.154344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.556 [2024-12-16 02:58:13.154351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.556 [2024-12-16 02:58:13.154357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.556 [2024-12-16 02:58:13.154371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.556 qpair failed and we were unable to recover it.
00:36:42.556 [2024-12-16 02:58:13.164331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.556 [2024-12-16 02:58:13.164385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.556 [2024-12-16 02:58:13.164399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.556 [2024-12-16 02:58:13.164406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.556 [2024-12-16 02:58:13.164412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.556 [2024-12-16 02:58:13.164427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.556 qpair failed and we were unable to recover it.
00:36:42.556 [2024-12-16 02:58:13.174348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.556 [2024-12-16 02:58:13.174402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.556 [2024-12-16 02:58:13.174416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.556 [2024-12-16 02:58:13.174422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.556 [2024-12-16 02:58:13.174429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.556 [2024-12-16 02:58:13.174443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.556 qpair failed and we were unable to recover it.
00:36:42.556 [2024-12-16 02:58:13.184306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.556 [2024-12-16 02:58:13.184358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.556 [2024-12-16 02:58:13.184375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.556 [2024-12-16 02:58:13.184382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.556 [2024-12-16 02:58:13.184388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.556 [2024-12-16 02:58:13.184403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.556 qpair failed and we were unable to recover it.
00:36:42.556 [2024-12-16 02:58:13.194410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.556 [2024-12-16 02:58:13.194463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.556 [2024-12-16 02:58:13.194476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.556 [2024-12-16 02:58:13.194483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.556 [2024-12-16 02:58:13.194489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.556 [2024-12-16 02:58:13.194503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.556 qpair failed and we were unable to recover it.
00:36:42.556 [2024-12-16 02:58:13.204441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.556 [2024-12-16 02:58:13.204497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.556 [2024-12-16 02:58:13.204510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.556 [2024-12-16 02:58:13.204517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.556 [2024-12-16 02:58:13.204523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.556 [2024-12-16 02:58:13.204538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.556 qpair failed and we were unable to recover it.
00:36:42.817 [2024-12-16 02:58:13.214481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.817 [2024-12-16 02:58:13.214540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.817 [2024-12-16 02:58:13.214553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.817 [2024-12-16 02:58:13.214561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.817 [2024-12-16 02:58:13.214568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.817 [2024-12-16 02:58:13.214583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.817 qpair failed and we were unable to recover it.
00:36:42.817 [2024-12-16 02:58:13.224498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.817 [2024-12-16 02:58:13.224550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.817 [2024-12-16 02:58:13.224564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.817 [2024-12-16 02:58:13.224571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.817 [2024-12-16 02:58:13.224581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.817 [2024-12-16 02:58:13.224595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.817 qpair failed and we were unable to recover it.
00:36:42.817 [2024-12-16 02:58:13.234524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.817 [2024-12-16 02:58:13.234580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.817 [2024-12-16 02:58:13.234594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.817 [2024-12-16 02:58:13.234601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.817 [2024-12-16 02:58:13.234608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.817 [2024-12-16 02:58:13.234622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.817 qpair failed and we were unable to recover it.
00:36:42.817 [2024-12-16 02:58:13.244547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.817 [2024-12-16 02:58:13.244604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.817 [2024-12-16 02:58:13.244617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.817 [2024-12-16 02:58:13.244624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.817 [2024-12-16 02:58:13.244630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.817 [2024-12-16 02:58:13.244644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.817 qpair failed and we were unable to recover it.
00:36:42.817 [2024-12-16 02:58:13.254580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.817 [2024-12-16 02:58:13.254635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.817 [2024-12-16 02:58:13.254651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.817 [2024-12-16 02:58:13.254657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.817 [2024-12-16 02:58:13.254664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.817 [2024-12-16 02:58:13.254680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.817 qpair failed and we were unable to recover it.
00:36:42.817 [2024-12-16 02:58:13.264591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.817 [2024-12-16 02:58:13.264645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.817 [2024-12-16 02:58:13.264658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.817 [2024-12-16 02:58:13.264665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.817 [2024-12-16 02:58:13.264672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.817 [2024-12-16 02:58:13.264686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.817 qpair failed and we were unable to recover it.
00:36:42.817 [2024-12-16 02:58:13.274594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.817 [2024-12-16 02:58:13.274646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.817 [2024-12-16 02:58:13.274661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.817 [2024-12-16 02:58:13.274668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.817 [2024-12-16 02:58:13.274674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.817 [2024-12-16 02:58:13.274688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.817 qpair failed and we were unable to recover it.
00:36:42.817 [2024-12-16 02:58:13.284653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.817 [2024-12-16 02:58:13.284711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.817 [2024-12-16 02:58:13.284725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.817 [2024-12-16 02:58:13.284731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.817 [2024-12-16 02:58:13.284738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.817 [2024-12-16 02:58:13.284752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.817 qpair failed and we were unable to recover it.
00:36:42.817 [2024-12-16 02:58:13.294711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.817 [2024-12-16 02:58:13.294767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.817 [2024-12-16 02:58:13.294782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.817 [2024-12-16 02:58:13.294789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.817 [2024-12-16 02:58:13.294795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.817 [2024-12-16 02:58:13.294809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.817 qpair failed and we were unable to recover it.
00:36:42.817 [2024-12-16 02:58:13.304715] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.817 [2024-12-16 02:58:13.304768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.817 [2024-12-16 02:58:13.304782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.817 [2024-12-16 02:58:13.304788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.817 [2024-12-16 02:58:13.304795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.817 [2024-12-16 02:58:13.304809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.817 qpair failed and we were unable to recover it.
00:36:42.817 [2024-12-16 02:58:13.314721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.817 [2024-12-16 02:58:13.314773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.817 [2024-12-16 02:58:13.314790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.817 [2024-12-16 02:58:13.314797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.817 [2024-12-16 02:58:13.314804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.817 [2024-12-16 02:58:13.314818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.817 qpair failed and we were unable to recover it.
00:36:42.817 [2024-12-16 02:58:13.324703] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.817 [2024-12-16 02:58:13.324762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.817 [2024-12-16 02:58:13.324776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.817 [2024-12-16 02:58:13.324783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.817 [2024-12-16 02:58:13.324790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.818 [2024-12-16 02:58:13.324805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.818 qpair failed and we were unable to recover it.
00:36:42.818 [2024-12-16 02:58:13.334809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.818 [2024-12-16 02:58:13.334871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.818 [2024-12-16 02:58:13.334886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.818 [2024-12-16 02:58:13.334894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.818 [2024-12-16 02:58:13.334901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.818 [2024-12-16 02:58:13.334915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.818 qpair failed and we were unable to recover it.
00:36:42.818 [2024-12-16 02:58:13.344800] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.818 [2024-12-16 02:58:13.344857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.818 [2024-12-16 02:58:13.344871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.818 [2024-12-16 02:58:13.344879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.818 [2024-12-16 02:58:13.344885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.818 [2024-12-16 02:58:13.344899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.818 qpair failed and we were unable to recover it.
00:36:42.818 [2024-12-16 02:58:13.354870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.818 [2024-12-16 02:58:13.354930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.818 [2024-12-16 02:58:13.354944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.818 [2024-12-16 02:58:13.354951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.818 [2024-12-16 02:58:13.354960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.818 [2024-12-16 02:58:13.354976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.818 qpair failed and we were unable to recover it.
00:36:42.818 [2024-12-16 02:58:13.364884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.818 [2024-12-16 02:58:13.364943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.818 [2024-12-16 02:58:13.364957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.818 [2024-12-16 02:58:13.364963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.818 [2024-12-16 02:58:13.364970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.818 [2024-12-16 02:58:13.364984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.818 qpair failed and we were unable to recover it.
00:36:42.818 [2024-12-16 02:58:13.374888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.818 [2024-12-16 02:58:13.374965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.818 [2024-12-16 02:58:13.374981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.818 [2024-12-16 02:58:13.374988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.818 [2024-12-16 02:58:13.374994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.818 [2024-12-16 02:58:13.375009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.818 qpair failed and we were unable to recover it.
00:36:42.818 [2024-12-16 02:58:13.384853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.818 [2024-12-16 02:58:13.384918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.818 [2024-12-16 02:58:13.384933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.818 [2024-12-16 02:58:13.384940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.818 [2024-12-16 02:58:13.384946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.818 [2024-12-16 02:58:13.384961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.818 qpair failed and we were unable to recover it.
00:36:42.818 [2024-12-16 02:58:13.394943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.818 [2024-12-16 02:58:13.395047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.818 [2024-12-16 02:58:13.395063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.818 [2024-12-16 02:58:13.395070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.818 [2024-12-16 02:58:13.395077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.818 [2024-12-16 02:58:13.395091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.818 qpair failed and we were unable to recover it.
00:36:42.818 [2024-12-16 02:58:13.404946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.818 [2024-12-16 02:58:13.405005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.818 [2024-12-16 02:58:13.405019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.818 [2024-12-16 02:58:13.405025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.818 [2024-12-16 02:58:13.405032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.818 [2024-12-16 02:58:13.405047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.818 qpair failed and we were unable to recover it.
00:36:42.818 [2024-12-16 02:58:13.414944] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.818 [2024-12-16 02:58:13.415001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.818 [2024-12-16 02:58:13.415015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.818 [2024-12-16 02:58:13.415022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.818 [2024-12-16 02:58:13.415029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.818 [2024-12-16 02:58:13.415043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.818 qpair failed and we were unable to recover it.
00:36:42.818 [2024-12-16 02:58:13.425058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.818 [2024-12-16 02:58:13.425111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.818 [2024-12-16 02:58:13.425125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.818 [2024-12-16 02:58:13.425132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.818 [2024-12-16 02:58:13.425138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.818 [2024-12-16 02:58:13.425152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.818 qpair failed and we were unable to recover it.
00:36:42.818 [2024-12-16 02:58:13.435054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.818 [2024-12-16 02:58:13.435112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.818 [2024-12-16 02:58:13.435126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.818 [2024-12-16 02:58:13.435133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.818 [2024-12-16 02:58:13.435139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.818 [2024-12-16 02:58:13.435154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.818 qpair failed and we were unable to recover it.
00:36:42.818 [2024-12-16 02:58:13.445110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.818 [2024-12-16 02:58:13.445165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.818 [2024-12-16 02:58:13.445181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.818 [2024-12-16 02:58:13.445188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.818 [2024-12-16 02:58:13.445195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.818 [2024-12-16 02:58:13.445209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.818 qpair failed and we were unable to recover it.
00:36:42.818 [2024-12-16 02:58:13.455104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.818 [2024-12-16 02:58:13.455163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.818 [2024-12-16 02:58:13.455177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.818 [2024-12-16 02:58:13.455183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.818 [2024-12-16 02:58:13.455189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.818 [2024-12-16 02:58:13.455204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.818 qpair failed and we were unable to recover it.
00:36:42.819 [2024-12-16 02:58:13.465190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:42.819 [2024-12-16 02:58:13.465264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:42.819 [2024-12-16 02:58:13.465277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:42.819 [2024-12-16 02:58:13.465284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:42.819 [2024-12-16 02:58:13.465290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:42.819 [2024-12-16 02:58:13.465304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:42.819 qpair failed and we were unable to recover it.
00:36:43.079 [2024-12-16 02:58:13.475125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.079 [2024-12-16 02:58:13.475182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.079 [2024-12-16 02:58:13.475196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.079 [2024-12-16 02:58:13.475204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.079 [2024-12-16 02:58:13.475210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.079 [2024-12-16 02:58:13.475224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.079 qpair failed and we were unable to recover it. 
00:36:43.079 [2024-12-16 02:58:13.485138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.079 [2024-12-16 02:58:13.485231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.079 [2024-12-16 02:58:13.485245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.079 [2024-12-16 02:58:13.485252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.079 [2024-12-16 02:58:13.485262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.079 [2024-12-16 02:58:13.485277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.079 qpair failed and we were unable to recover it. 
00:36:43.079 [2024-12-16 02:58:13.495248] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.079 [2024-12-16 02:58:13.495300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.079 [2024-12-16 02:58:13.495313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.079 [2024-12-16 02:58:13.495320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.079 [2024-12-16 02:58:13.495326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.079 [2024-12-16 02:58:13.495341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.079 qpair failed and we were unable to recover it. 
00:36:43.079 [2024-12-16 02:58:13.505187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.079 [2024-12-16 02:58:13.505259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.079 [2024-12-16 02:58:13.505274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.079 [2024-12-16 02:58:13.505281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.079 [2024-12-16 02:58:13.505288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.079 [2024-12-16 02:58:13.505303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.079 qpair failed and we were unable to recover it. 
00:36:43.079 [2024-12-16 02:58:13.515291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.079 [2024-12-16 02:58:13.515366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.079 [2024-12-16 02:58:13.515380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.079 [2024-12-16 02:58:13.515387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.079 [2024-12-16 02:58:13.515393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.079 [2024-12-16 02:58:13.515408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.079 qpair failed and we were unable to recover it. 
00:36:43.079 [2024-12-16 02:58:13.525292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.079 [2024-12-16 02:58:13.525351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.079 [2024-12-16 02:58:13.525368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.079 [2024-12-16 02:58:13.525376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.079 [2024-12-16 02:58:13.525382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.079 [2024-12-16 02:58:13.525398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.079 qpair failed and we were unable to recover it. 
00:36:43.079 [2024-12-16 02:58:13.535381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.079 [2024-12-16 02:58:13.535461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.079 [2024-12-16 02:58:13.535476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.079 [2024-12-16 02:58:13.535483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.079 [2024-12-16 02:58:13.535490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.079 [2024-12-16 02:58:13.535505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.079 qpair failed and we were unable to recover it. 
00:36:43.079 [2024-12-16 02:58:13.545360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.079 [2024-12-16 02:58:13.545413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.079 [2024-12-16 02:58:13.545427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.079 [2024-12-16 02:58:13.545433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.079 [2024-12-16 02:58:13.545440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.079 [2024-12-16 02:58:13.545455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.079 qpair failed and we were unable to recover it. 
00:36:43.079 [2024-12-16 02:58:13.555415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.079 [2024-12-16 02:58:13.555479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.079 [2024-12-16 02:58:13.555493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.079 [2024-12-16 02:58:13.555501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.079 [2024-12-16 02:58:13.555507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.079 [2024-12-16 02:58:13.555522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.079 qpair failed and we were unable to recover it. 
00:36:43.079 [2024-12-16 02:58:13.565423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.079 [2024-12-16 02:58:13.565476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.079 [2024-12-16 02:58:13.565490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.079 [2024-12-16 02:58:13.565497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.079 [2024-12-16 02:58:13.565504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.079 [2024-12-16 02:58:13.565518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.079 qpair failed and we were unable to recover it. 
00:36:43.079 [2024-12-16 02:58:13.575501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.079 [2024-12-16 02:58:13.575558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.079 [2024-12-16 02:58:13.575576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.079 [2024-12-16 02:58:13.575583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.079 [2024-12-16 02:58:13.575590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.079 [2024-12-16 02:58:13.575604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.079 qpair failed and we were unable to recover it. 
00:36:43.079 [2024-12-16 02:58:13.585438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.079 [2024-12-16 02:58:13.585499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.079 [2024-12-16 02:58:13.585513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.080 [2024-12-16 02:58:13.585520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.080 [2024-12-16 02:58:13.585527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.080 [2024-12-16 02:58:13.585541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.080 qpair failed and we were unable to recover it. 
00:36:43.080 [2024-12-16 02:58:13.595473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.080 [2024-12-16 02:58:13.595520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.080 [2024-12-16 02:58:13.595533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.080 [2024-12-16 02:58:13.595541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.080 [2024-12-16 02:58:13.595546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.080 [2024-12-16 02:58:13.595561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.080 qpair failed and we were unable to recover it. 
00:36:43.080 [2024-12-16 02:58:13.605508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.080 [2024-12-16 02:58:13.605565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.080 [2024-12-16 02:58:13.605579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.080 [2024-12-16 02:58:13.605585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.080 [2024-12-16 02:58:13.605591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.080 [2024-12-16 02:58:13.605606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.080 qpair failed and we were unable to recover it. 
00:36:43.080 [2024-12-16 02:58:13.615727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.080 [2024-12-16 02:58:13.615793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.080 [2024-12-16 02:58:13.615806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.080 [2024-12-16 02:58:13.615813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.080 [2024-12-16 02:58:13.615823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.080 [2024-12-16 02:58:13.615838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.080 qpair failed and we were unable to recover it. 
00:36:43.080 [2024-12-16 02:58:13.625731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.080 [2024-12-16 02:58:13.625831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.080 [2024-12-16 02:58:13.625849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.080 [2024-12-16 02:58:13.625856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.080 [2024-12-16 02:58:13.625863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.080 [2024-12-16 02:58:13.625877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.080 qpair failed and we were unable to recover it. 
00:36:43.080 [2024-12-16 02:58:13.635707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.080 [2024-12-16 02:58:13.635773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.080 [2024-12-16 02:58:13.635786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.080 [2024-12-16 02:58:13.635793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.080 [2024-12-16 02:58:13.635799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.080 [2024-12-16 02:58:13.635814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.080 qpair failed and we were unable to recover it. 
00:36:43.080 [2024-12-16 02:58:13.645731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.080 [2024-12-16 02:58:13.645807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.080 [2024-12-16 02:58:13.645821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.080 [2024-12-16 02:58:13.645828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.080 [2024-12-16 02:58:13.645834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.080 [2024-12-16 02:58:13.645853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.080 qpair failed and we were unable to recover it. 
00:36:43.080 [2024-12-16 02:58:13.655730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.080 [2024-12-16 02:58:13.655782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.080 [2024-12-16 02:58:13.655796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.080 [2024-12-16 02:58:13.655803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.080 [2024-12-16 02:58:13.655809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.080 [2024-12-16 02:58:13.655823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.080 qpair failed and we were unable to recover it. 
00:36:43.080 [2024-12-16 02:58:13.665657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.080 [2024-12-16 02:58:13.665708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.080 [2024-12-16 02:58:13.665721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.080 [2024-12-16 02:58:13.665728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.080 [2024-12-16 02:58:13.665734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.080 [2024-12-16 02:58:13.665749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.080 qpair failed and we were unable to recover it. 
00:36:43.080 [2024-12-16 02:58:13.675769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.080 [2024-12-16 02:58:13.675821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.080 [2024-12-16 02:58:13.675835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.080 [2024-12-16 02:58:13.675842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.080 [2024-12-16 02:58:13.675852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.080 [2024-12-16 02:58:13.675867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.080 qpair failed and we were unable to recover it. 
00:36:43.080 [2024-12-16 02:58:13.685801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.080 [2024-12-16 02:58:13.685858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.080 [2024-12-16 02:58:13.685873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.080 [2024-12-16 02:58:13.685881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.080 [2024-12-16 02:58:13.685887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.080 [2024-12-16 02:58:13.685902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.080 qpair failed and we were unable to recover it. 
00:36:43.080 [2024-12-16 02:58:13.695805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.080 [2024-12-16 02:58:13.695875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.080 [2024-12-16 02:58:13.695888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.080 [2024-12-16 02:58:13.695896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.080 [2024-12-16 02:58:13.695902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.080 [2024-12-16 02:58:13.695916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.080 qpair failed and we were unable to recover it. 
00:36:43.080 [2024-12-16 02:58:13.705789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.080 [2024-12-16 02:58:13.705842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.080 [2024-12-16 02:58:13.705863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.080 [2024-12-16 02:58:13.705870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.080 [2024-12-16 02:58:13.705877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.080 [2024-12-16 02:58:13.705891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.080 qpair failed and we were unable to recover it. 
00:36:43.080 [2024-12-16 02:58:13.715887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.080 [2024-12-16 02:58:13.715944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.080 [2024-12-16 02:58:13.715957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.080 [2024-12-16 02:58:13.715965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.080 [2024-12-16 02:58:13.715971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.080 [2024-12-16 02:58:13.715985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.081 qpair failed and we were unable to recover it. 
00:36:43.081 [2024-12-16 02:58:13.725973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.081 [2024-12-16 02:58:13.726078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.081 [2024-12-16 02:58:13.726092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.081 [2024-12-16 02:58:13.726099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.081 [2024-12-16 02:58:13.726105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.081 [2024-12-16 02:58:13.726120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.081 qpair failed and we were unable to recover it. 
00:36:43.081 [2024-12-16 02:58:13.735875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.081 [2024-12-16 02:58:13.735926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.081 [2024-12-16 02:58:13.735940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.081 [2024-12-16 02:58:13.735947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.081 [2024-12-16 02:58:13.735954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.081 [2024-12-16 02:58:13.735970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.081 qpair failed and we were unable to recover it. 
00:36:43.341 [2024-12-16 02:58:13.745984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.341 [2024-12-16 02:58:13.746036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.341 [2024-12-16 02:58:13.746050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.341 [2024-12-16 02:58:13.746057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.341 [2024-12-16 02:58:13.746067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.341 [2024-12-16 02:58:13.746082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.341 qpair failed and we were unable to recover it. 
00:36:43.341 [2024-12-16 02:58:13.755926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.341 [2024-12-16 02:58:13.755977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.341 [2024-12-16 02:58:13.755990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.341 [2024-12-16 02:58:13.755997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.341 [2024-12-16 02:58:13.756003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.341 [2024-12-16 02:58:13.756018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.341 qpair failed and we were unable to recover it. 
00:36:43.341 [2024-12-16 02:58:13.766023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.341 [2024-12-16 02:58:13.766078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.341 [2024-12-16 02:58:13.766092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.341 [2024-12-16 02:58:13.766098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.341 [2024-12-16 02:58:13.766105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.341 [2024-12-16 02:58:13.766120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.341 qpair failed and we were unable to recover it. 
00:36:43.341 [2024-12-16 02:58:13.776080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.341 [2024-12-16 02:58:13.776136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.341 [2024-12-16 02:58:13.776150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.341 [2024-12-16 02:58:13.776156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.341 [2024-12-16 02:58:13.776162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.341 [2024-12-16 02:58:13.776177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.341 qpair failed and we were unable to recover it. 
00:36:43.341 [2024-12-16 02:58:13.786082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.341 [2024-12-16 02:58:13.786135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.341 [2024-12-16 02:58:13.786149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.341 [2024-12-16 02:58:13.786155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.341 [2024-12-16 02:58:13.786161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.341 [2024-12-16 02:58:13.786176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.341 qpair failed and we were unable to recover it. 
00:36:43.341 [2024-12-16 02:58:13.796096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.341 [2024-12-16 02:58:13.796160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.341 [2024-12-16 02:58:13.796173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.341 [2024-12-16 02:58:13.796181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.341 [2024-12-16 02:58:13.796187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.341 [2024-12-16 02:58:13.796201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.341 qpair failed and we were unable to recover it. 
00:36:43.341 [2024-12-16 02:58:13.806157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.341 [2024-12-16 02:58:13.806213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.341 [2024-12-16 02:58:13.806226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.341 [2024-12-16 02:58:13.806233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.341 [2024-12-16 02:58:13.806239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.341 [2024-12-16 02:58:13.806254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.341 qpair failed and we were unable to recover it. 
00:36:43.341 [2024-12-16 02:58:13.816178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.341 [2024-12-16 02:58:13.816237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.341 [2024-12-16 02:58:13.816251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.341 [2024-12-16 02:58:13.816258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.341 [2024-12-16 02:58:13.816265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.341 [2024-12-16 02:58:13.816279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.341 qpair failed and we were unable to recover it. 
00:36:43.341 [2024-12-16 02:58:13.826196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.341 [2024-12-16 02:58:13.826255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.341 [2024-12-16 02:58:13.826270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.341 [2024-12-16 02:58:13.826276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.341 [2024-12-16 02:58:13.826282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.341 [2024-12-16 02:58:13.826297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.341 qpair failed and we were unable to recover it. 
00:36:43.341 [2024-12-16 02:58:13.836209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.341 [2024-12-16 02:58:13.836273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.341 [2024-12-16 02:58:13.836290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.341 [2024-12-16 02:58:13.836297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.341 [2024-12-16 02:58:13.836303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.341 [2024-12-16 02:58:13.836317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.341 qpair failed and we were unable to recover it. 
00:36:43.341 [2024-12-16 02:58:13.846259] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.341 [2024-12-16 02:58:13.846321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.341 [2024-12-16 02:58:13.846334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.341 [2024-12-16 02:58:13.846341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.341 [2024-12-16 02:58:13.846347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.341 [2024-12-16 02:58:13.846361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.341 qpair failed and we were unable to recover it. 
00:36:43.341 [2024-12-16 02:58:13.856211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.341 [2024-12-16 02:58:13.856268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.341 [2024-12-16 02:58:13.856282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.341 [2024-12-16 02:58:13.856289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.341 [2024-12-16 02:58:13.856295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.341 [2024-12-16 02:58:13.856310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.341 qpair failed and we were unable to recover it. 
00:36:43.341 [2024-12-16 02:58:13.866311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.342 [2024-12-16 02:58:13.866365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.342 [2024-12-16 02:58:13.866378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.342 [2024-12-16 02:58:13.866385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.342 [2024-12-16 02:58:13.866391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.342 [2024-12-16 02:58:13.866406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.342 qpair failed and we were unable to recover it. 
00:36:43.342 [2024-12-16 02:58:13.876342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.342 [2024-12-16 02:58:13.876394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.342 [2024-12-16 02:58:13.876408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.342 [2024-12-16 02:58:13.876414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.342 [2024-12-16 02:58:13.876423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.342 [2024-12-16 02:58:13.876438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.342 qpair failed and we were unable to recover it. 
00:36:43.342 [2024-12-16 02:58:13.886374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.342 [2024-12-16 02:58:13.886426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.342 [2024-12-16 02:58:13.886440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.342 [2024-12-16 02:58:13.886447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.342 [2024-12-16 02:58:13.886454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.342 [2024-12-16 02:58:13.886469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.342 qpair failed and we were unable to recover it. 
00:36:43.342 [2024-12-16 02:58:13.896397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.342 [2024-12-16 02:58:13.896456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.342 [2024-12-16 02:58:13.896470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.342 [2024-12-16 02:58:13.896477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.342 [2024-12-16 02:58:13.896484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.342 [2024-12-16 02:58:13.896498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.342 qpair failed and we were unable to recover it. 
00:36:43.342 [2024-12-16 02:58:13.906419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.342 [2024-12-16 02:58:13.906474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.342 [2024-12-16 02:58:13.906488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.342 [2024-12-16 02:58:13.906495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.342 [2024-12-16 02:58:13.906501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.342 [2024-12-16 02:58:13.906515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.342 qpair failed and we were unable to recover it. 
00:36:43.342 [2024-12-16 02:58:13.916441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.342 [2024-12-16 02:58:13.916493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.342 [2024-12-16 02:58:13.916507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.342 [2024-12-16 02:58:13.916514] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.342 [2024-12-16 02:58:13.916520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.342 [2024-12-16 02:58:13.916534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.342 qpair failed and we were unable to recover it. 
00:36:43.342 [2024-12-16 02:58:13.926483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.342 [2024-12-16 02:58:13.926537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.342 [2024-12-16 02:58:13.926551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.342 [2024-12-16 02:58:13.926558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.342 [2024-12-16 02:58:13.926565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.342 [2024-12-16 02:58:13.926579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.342 qpair failed and we were unable to recover it. 
00:36:43.342 [2024-12-16 02:58:13.936509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.342 [2024-12-16 02:58:13.936564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.342 [2024-12-16 02:58:13.936577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.342 [2024-12-16 02:58:13.936584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.342 [2024-12-16 02:58:13.936590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.342 [2024-12-16 02:58:13.936605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.342 qpair failed and we were unable to recover it. 
00:36:43.342 [2024-12-16 02:58:13.946544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.342 [2024-12-16 02:58:13.946599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.342 [2024-12-16 02:58:13.946613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.342 [2024-12-16 02:58:13.946620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.342 [2024-12-16 02:58:13.946626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.342 [2024-12-16 02:58:13.946641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.342 qpair failed and we were unable to recover it. 
00:36:43.342 [2024-12-16 02:58:13.956568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.342 [2024-12-16 02:58:13.956642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.342 [2024-12-16 02:58:13.956656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.342 [2024-12-16 02:58:13.956663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.342 [2024-12-16 02:58:13.956669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.342 [2024-12-16 02:58:13.956684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.342 qpair failed and we were unable to recover it. 
00:36:43.342 [2024-12-16 02:58:13.966602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.342 [2024-12-16 02:58:13.966656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.342 [2024-12-16 02:58:13.966673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.342 [2024-12-16 02:58:13.966680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.342 [2024-12-16 02:58:13.966686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.342 [2024-12-16 02:58:13.966701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.342 qpair failed and we were unable to recover it. 
00:36:43.342 [2024-12-16 02:58:13.976666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.342 [2024-12-16 02:58:13.976726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.342 [2024-12-16 02:58:13.976740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.342 [2024-12-16 02:58:13.976747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.342 [2024-12-16 02:58:13.976753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.342 [2024-12-16 02:58:13.976767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.342 qpair failed and we were unable to recover it. 
00:36:43.342 [2024-12-16 02:58:13.986660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.342 [2024-12-16 02:58:13.986729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.342 [2024-12-16 02:58:13.986743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.342 [2024-12-16 02:58:13.986750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.342 [2024-12-16 02:58:13.986756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.342 [2024-12-16 02:58:13.986771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.342 qpair failed and we were unable to recover it. 
00:36:43.342 [2024-12-16 02:58:13.996658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.342 [2024-12-16 02:58:13.996709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.343 [2024-12-16 02:58:13.996722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.343 [2024-12-16 02:58:13.996729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.343 [2024-12-16 02:58:13.996736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.343 [2024-12-16 02:58:13.996751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.343 qpair failed and we were unable to recover it. 
00:36:43.603 [2024-12-16 02:58:14.006724] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.603 [2024-12-16 02:58:14.006781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.603 [2024-12-16 02:58:14.006794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.603 [2024-12-16 02:58:14.006801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.603 [2024-12-16 02:58:14.006811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.603 [2024-12-16 02:58:14.006825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.603 qpair failed and we were unable to recover it. 
00:36:43.603 [2024-12-16 02:58:14.016807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.603 [2024-12-16 02:58:14.016878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.603 [2024-12-16 02:58:14.016892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.603 [2024-12-16 02:58:14.016899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.603 [2024-12-16 02:58:14.016905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.603 [2024-12-16 02:58:14.016920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.603 qpair failed and we were unable to recover it. 
00:36:43.603 [2024-12-16 02:58:14.026815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.603 [2024-12-16 02:58:14.026874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.603 [2024-12-16 02:58:14.026889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.603 [2024-12-16 02:58:14.026896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.603 [2024-12-16 02:58:14.026902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.603 [2024-12-16 02:58:14.026916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.603 qpair failed and we were unable to recover it. 
00:36:43.603 [2024-12-16 02:58:14.036855] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.603 [2024-12-16 02:58:14.036904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.603 [2024-12-16 02:58:14.036918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.603 [2024-12-16 02:58:14.036925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.603 [2024-12-16 02:58:14.036931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.603 [2024-12-16 02:58:14.036946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.603 qpair failed and we were unable to recover it. 
00:36:43.603 [2024-12-16 02:58:14.046842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.603 [2024-12-16 02:58:14.046904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.603 [2024-12-16 02:58:14.046918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.603 [2024-12-16 02:58:14.046924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.603 [2024-12-16 02:58:14.046931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.603 [2024-12-16 02:58:14.046945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.603 qpair failed and we were unable to recover it. 
00:36:43.603 [2024-12-16 02:58:14.056859] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.603 [2024-12-16 02:58:14.056914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.603 [2024-12-16 02:58:14.056928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.603 [2024-12-16 02:58:14.056935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.603 [2024-12-16 02:58:14.056941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.603 [2024-12-16 02:58:14.056956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.603 qpair failed and we were unable to recover it. 
00:36:43.603 [2024-12-16 02:58:14.066911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.603 [2024-12-16 02:58:14.066982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.603 [2024-12-16 02:58:14.066996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.603 [2024-12-16 02:58:14.067002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.603 [2024-12-16 02:58:14.067008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.603 [2024-12-16 02:58:14.067022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.603 qpair failed and we were unable to recover it. 
00:36:43.603 [2024-12-16 02:58:14.076908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.603 [2024-12-16 02:58:14.076961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.603 [2024-12-16 02:58:14.076975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.603 [2024-12-16 02:58:14.076982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.603 [2024-12-16 02:58:14.076988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.603 [2024-12-16 02:58:14.077003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.603 qpair failed and we were unable to recover it. 
00:36:43.603 [2024-12-16 02:58:14.086956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.603 [2024-12-16 02:58:14.087011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.603 [2024-12-16 02:58:14.087025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.603 [2024-12-16 02:58:14.087032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.603 [2024-12-16 02:58:14.087039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.603 [2024-12-16 02:58:14.087054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.603 qpair failed and we were unable to recover it. 
00:36:43.603 [2024-12-16 02:58:14.096956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.603 [2024-12-16 02:58:14.097013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.603 [2024-12-16 02:58:14.097029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.603 [2024-12-16 02:58:14.097037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.603 [2024-12-16 02:58:14.097044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.603 [2024-12-16 02:58:14.097058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.603 qpair failed and we were unable to recover it. 
00:36:43.603 [2024-12-16 02:58:14.106930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.603 [2024-12-16 02:58:14.106987] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.603 [2024-12-16 02:58:14.107000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.603 [2024-12-16 02:58:14.107006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.603 [2024-12-16 02:58:14.107014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.603 [2024-12-16 02:58:14.107029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.604 qpair failed and we were unable to recover it. 
00:36:43.604 [2024-12-16 02:58:14.117025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.604 [2024-12-16 02:58:14.117079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.604 [2024-12-16 02:58:14.117092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.604 [2024-12-16 02:58:14.117099] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.604 [2024-12-16 02:58:14.117105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.604 [2024-12-16 02:58:14.117119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.604 qpair failed and we were unable to recover it. 
00:36:43.604 [2024-12-16 02:58:14.127013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.604 [2024-12-16 02:58:14.127079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.604 [2024-12-16 02:58:14.127093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.604 [2024-12-16 02:58:14.127101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.604 [2024-12-16 02:58:14.127107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.604 [2024-12-16 02:58:14.127121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.604 qpair failed and we were unable to recover it. 
00:36:43.604 [2024-12-16 02:58:14.137127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.604 [2024-12-16 02:58:14.137218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.604 [2024-12-16 02:58:14.137232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.604 [2024-12-16 02:58:14.137239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.604 [2024-12-16 02:58:14.137248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.604 [2024-12-16 02:58:14.137262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.604 qpair failed and we were unable to recover it. 
00:36:43.604 [2024-12-16 02:58:14.147116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.604 [2024-12-16 02:58:14.147171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.604 [2024-12-16 02:58:14.147185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.604 [2024-12-16 02:58:14.147191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.604 [2024-12-16 02:58:14.147198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.604 [2024-12-16 02:58:14.147212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.604 qpair failed and we were unable to recover it. 
00:36:43.604 [2024-12-16 02:58:14.157144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.604 [2024-12-16 02:58:14.157200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.604 [2024-12-16 02:58:14.157214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.604 [2024-12-16 02:58:14.157221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.604 [2024-12-16 02:58:14.157228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.604 [2024-12-16 02:58:14.157243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.604 qpair failed and we were unable to recover it. 
00:36:43.604 [2024-12-16 02:58:14.167178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.604 [2024-12-16 02:58:14.167233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.604 [2024-12-16 02:58:14.167247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.604 [2024-12-16 02:58:14.167254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.604 [2024-12-16 02:58:14.167260] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.604 [2024-12-16 02:58:14.167274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.604 qpair failed and we were unable to recover it. 
00:36:43.604 [2024-12-16 02:58:14.177190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.604 [2024-12-16 02:58:14.177246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.604 [2024-12-16 02:58:14.177260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.604 [2024-12-16 02:58:14.177266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.604 [2024-12-16 02:58:14.177273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.604 [2024-12-16 02:58:14.177287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.604 qpair failed and we were unable to recover it. 
00:36:43.604 [2024-12-16 02:58:14.187217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.604 [2024-12-16 02:58:14.187271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.604 [2024-12-16 02:58:14.187284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.604 [2024-12-16 02:58:14.187291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.604 [2024-12-16 02:58:14.187297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.604 [2024-12-16 02:58:14.187311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.604 qpair failed and we were unable to recover it. 
00:36:43.604 [2024-12-16 02:58:14.197241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.604 [2024-12-16 02:58:14.197295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.604 [2024-12-16 02:58:14.197308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.604 [2024-12-16 02:58:14.197315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.604 [2024-12-16 02:58:14.197322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.604 [2024-12-16 02:58:14.197336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.604 qpair failed and we were unable to recover it. 
00:36:43.604 [2024-12-16 02:58:14.207192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.604 [2024-12-16 02:58:14.207251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.604 [2024-12-16 02:58:14.207265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.604 [2024-12-16 02:58:14.207271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.604 [2024-12-16 02:58:14.207277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.604 [2024-12-16 02:58:14.207292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.604 qpair failed and we were unable to recover it. 
00:36:43.604 [2024-12-16 02:58:14.217312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.604 [2024-12-16 02:58:14.217368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.604 [2024-12-16 02:58:14.217381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.604 [2024-12-16 02:58:14.217388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.604 [2024-12-16 02:58:14.217394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.604 [2024-12-16 02:58:14.217408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.604 qpair failed and we were unable to recover it. 
00:36:43.604 [2024-12-16 02:58:14.227334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.604 [2024-12-16 02:58:14.227385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.604 [2024-12-16 02:58:14.227402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.604 [2024-12-16 02:58:14.227409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.604 [2024-12-16 02:58:14.227416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.604 [2024-12-16 02:58:14.227430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.604 qpair failed and we were unable to recover it. 
00:36:43.604 [2024-12-16 02:58:14.237356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.604 [2024-12-16 02:58:14.237407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.604 [2024-12-16 02:58:14.237421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.604 [2024-12-16 02:58:14.237428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.604 [2024-12-16 02:58:14.237435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.604 [2024-12-16 02:58:14.237450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.604 qpair failed and we were unable to recover it. 
00:36:43.604 [2024-12-16 02:58:14.247404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.604 [2024-12-16 02:58:14.247459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.605 [2024-12-16 02:58:14.247473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.605 [2024-12-16 02:58:14.247479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.605 [2024-12-16 02:58:14.247486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.605 [2024-12-16 02:58:14.247501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.605 qpair failed and we were unable to recover it. 
00:36:43.605 [2024-12-16 02:58:14.257415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.605 [2024-12-16 02:58:14.257471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.605 [2024-12-16 02:58:14.257485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.605 [2024-12-16 02:58:14.257491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.605 [2024-12-16 02:58:14.257498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.605 [2024-12-16 02:58:14.257512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.605 qpair failed and we were unable to recover it. 
00:36:43.865 [2024-12-16 02:58:14.267446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.865 [2024-12-16 02:58:14.267499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.865 [2024-12-16 02:58:14.267513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.865 [2024-12-16 02:58:14.267520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.865 [2024-12-16 02:58:14.267530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.865 [2024-12-16 02:58:14.267544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.865 qpair failed and we were unable to recover it. 
00:36:43.865 [2024-12-16 02:58:14.277476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.865 [2024-12-16 02:58:14.277536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.865 [2024-12-16 02:58:14.277550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.865 [2024-12-16 02:58:14.277557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.865 [2024-12-16 02:58:14.277563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.865 [2024-12-16 02:58:14.277577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.865 qpair failed and we were unable to recover it. 
00:36:43.865 [2024-12-16 02:58:14.287513] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.865 [2024-12-16 02:58:14.287568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.865 [2024-12-16 02:58:14.287582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.865 [2024-12-16 02:58:14.287588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.865 [2024-12-16 02:58:14.287594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.865 [2024-12-16 02:58:14.287609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.865 qpair failed and we were unable to recover it. 
00:36:43.865 [2024-12-16 02:58:14.297546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.865 [2024-12-16 02:58:14.297610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.865 [2024-12-16 02:58:14.297624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.865 [2024-12-16 02:58:14.297631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.865 [2024-12-16 02:58:14.297637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.865 [2024-12-16 02:58:14.297651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.865 qpair failed and we were unable to recover it. 
00:36:43.865 [2024-12-16 02:58:14.307624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.865 [2024-12-16 02:58:14.307681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.865 [2024-12-16 02:58:14.307694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.865 [2024-12-16 02:58:14.307701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.865 [2024-12-16 02:58:14.307707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.865 [2024-12-16 02:58:14.307721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.865 qpair failed and we were unable to recover it. 
00:36:43.865 [2024-12-16 02:58:14.317520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.865 [2024-12-16 02:58:14.317575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.865 [2024-12-16 02:58:14.317588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.865 [2024-12-16 02:58:14.317595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.865 [2024-12-16 02:58:14.317602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.865 [2024-12-16 02:58:14.317616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.865 qpair failed and we were unable to recover it. 
00:36:43.865 [2024-12-16 02:58:14.327552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.865 [2024-12-16 02:58:14.327611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.865 [2024-12-16 02:58:14.327626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.865 [2024-12-16 02:58:14.327634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.865 [2024-12-16 02:58:14.327641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.865 [2024-12-16 02:58:14.327656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.866 qpair failed and we were unable to recover it. 
00:36:43.866 [2024-12-16 02:58:14.337659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.866 [2024-12-16 02:58:14.337717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.866 [2024-12-16 02:58:14.337731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.866 [2024-12-16 02:58:14.337739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.866 [2024-12-16 02:58:14.337746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.866 [2024-12-16 02:58:14.337760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.866 qpair failed and we were unable to recover it. 
00:36:43.866 [2024-12-16 02:58:14.347678] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.866 [2024-12-16 02:58:14.347736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.866 [2024-12-16 02:58:14.347750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.866 [2024-12-16 02:58:14.347758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.866 [2024-12-16 02:58:14.347765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.866 [2024-12-16 02:58:14.347779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.866 qpair failed and we were unable to recover it. 
00:36:43.866 [2024-12-16 02:58:14.357758] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.866 [2024-12-16 02:58:14.357813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.866 [2024-12-16 02:58:14.357829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.866 [2024-12-16 02:58:14.357836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.866 [2024-12-16 02:58:14.357842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.866 [2024-12-16 02:58:14.357861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.866 qpair failed and we were unable to recover it. 
00:36:43.866 [2024-12-16 02:58:14.367760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.866 [2024-12-16 02:58:14.367826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.866 [2024-12-16 02:58:14.367840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.866 [2024-12-16 02:58:14.367852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.866 [2024-12-16 02:58:14.367859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.866 [2024-12-16 02:58:14.367874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.866 qpair failed and we were unable to recover it. 
00:36:43.866 [2024-12-16 02:58:14.377767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:43.866 [2024-12-16 02:58:14.377821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:43.866 [2024-12-16 02:58:14.377834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:43.866 [2024-12-16 02:58:14.377841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:43.866 [2024-12-16 02:58:14.377857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:43.866 [2024-12-16 02:58:14.377872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:43.866 qpair failed and we were unable to recover it. 
00:36:43.866 [2024-12-16 02:58:14.387710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:43.866 [2024-12-16 02:58:14.387762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:43.866 [2024-12-16 02:58:14.387775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:43.866 [2024-12-16 02:58:14.387782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:43.866 [2024-12-16 02:58:14.387789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:43.866 [2024-12-16 02:58:14.387802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:43.866 qpair failed and we were unable to recover it.
00:36:43.866 [2024-12-16 02:58:14.397815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:43.866 [2024-12-16 02:58:14.397867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:43.866 [2024-12-16 02:58:14.397882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:43.866 [2024-12-16 02:58:14.397888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:43.866 [2024-12-16 02:58:14.397898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:43.866 [2024-12-16 02:58:14.397913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:43.866 qpair failed and we were unable to recover it.
00:36:43.866 [2024-12-16 02:58:14.407867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:43.866 [2024-12-16 02:58:14.407923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:43.866 [2024-12-16 02:58:14.407938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:43.866 [2024-12-16 02:58:14.407945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:43.866 [2024-12-16 02:58:14.407951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:43.866 [2024-12-16 02:58:14.407966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:43.866 qpair failed and we were unable to recover it.
00:36:43.866 [2024-12-16 02:58:14.417813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:43.866 [2024-12-16 02:58:14.417867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:43.866 [2024-12-16 02:58:14.417881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:43.866 [2024-12-16 02:58:14.417887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:43.866 [2024-12-16 02:58:14.417894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:43.866 [2024-12-16 02:58:14.417909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:43.866 qpair failed and we were unable to recover it.
00:36:43.866 [2024-12-16 02:58:14.427904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:43.866 [2024-12-16 02:58:14.427978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:43.866 [2024-12-16 02:58:14.427992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:43.866 [2024-12-16 02:58:14.427999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:43.866 [2024-12-16 02:58:14.428005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:43.866 [2024-12-16 02:58:14.428020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:43.866 qpair failed and we were unable to recover it.
00:36:43.866 [2024-12-16 02:58:14.437947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:43.866 [2024-12-16 02:58:14.438020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:43.866 [2024-12-16 02:58:14.438035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:43.866 [2024-12-16 02:58:14.438042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:43.866 [2024-12-16 02:58:14.438048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:43.866 [2024-12-16 02:58:14.438063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:43.866 qpair failed and we were unable to recover it.
00:36:43.866 [2024-12-16 02:58:14.447970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:43.866 [2024-12-16 02:58:14.448031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:43.866 [2024-12-16 02:58:14.448045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:43.866 [2024-12-16 02:58:14.448052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:43.866 [2024-12-16 02:58:14.448059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:43.866 [2024-12-16 02:58:14.448073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:43.866 qpair failed and we were unable to recover it.
00:36:43.866 [2024-12-16 02:58:14.457917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:43.866 [2024-12-16 02:58:14.457970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:43.866 [2024-12-16 02:58:14.457984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:43.866 [2024-12-16 02:58:14.457991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:43.866 [2024-12-16 02:58:14.457997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:43.866 [2024-12-16 02:58:14.458011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:43.866 qpair failed and we were unable to recover it.
00:36:43.866 [2024-12-16 02:58:14.468028] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:43.866 [2024-12-16 02:58:14.468091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:43.867 [2024-12-16 02:58:14.468105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:43.867 [2024-12-16 02:58:14.468112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:43.867 [2024-12-16 02:58:14.468118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:43.867 [2024-12-16 02:58:14.468132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:43.867 qpair failed and we were unable to recover it.
00:36:43.867 [2024-12-16 02:58:14.478047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:43.867 [2024-12-16 02:58:14.478101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:43.867 [2024-12-16 02:58:14.478114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:43.867 [2024-12-16 02:58:14.478121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:43.867 [2024-12-16 02:58:14.478127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:43.867 [2024-12-16 02:58:14.478141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:43.867 qpair failed and we were unable to recover it.
00:36:43.867 [2024-12-16 02:58:14.488081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:43.867 [2024-12-16 02:58:14.488135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:43.867 [2024-12-16 02:58:14.488152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:43.867 [2024-12-16 02:58:14.488159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:43.867 [2024-12-16 02:58:14.488165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:43.867 [2024-12-16 02:58:14.488179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:43.867 qpair failed and we were unable to recover it.
00:36:43.867 [2024-12-16 02:58:14.498112] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:43.867 [2024-12-16 02:58:14.498175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:43.867 [2024-12-16 02:58:14.498189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:43.867 [2024-12-16 02:58:14.498196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:43.867 [2024-12-16 02:58:14.498202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:43.867 [2024-12-16 02:58:14.498217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:43.867 qpair failed and we were unable to recover it.
00:36:43.867 [2024-12-16 02:58:14.508132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:43.867 [2024-12-16 02:58:14.508212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:43.867 [2024-12-16 02:58:14.508226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:43.867 [2024-12-16 02:58:14.508232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:43.867 [2024-12-16 02:58:14.508238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:43.867 [2024-12-16 02:58:14.508252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:43.867 qpair failed and we were unable to recover it.
00:36:43.867 [2024-12-16 02:58:14.518196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:43.867 [2024-12-16 02:58:14.518250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:43.867 [2024-12-16 02:58:14.518266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:43.867 [2024-12-16 02:58:14.518273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:43.867 [2024-12-16 02:58:14.518279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:43.867 [2024-12-16 02:58:14.518295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:43.867 qpair failed and we were unable to recover it.
00:36:44.127 [2024-12-16 02:58:14.528197] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.127 [2024-12-16 02:58:14.528256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.127 [2024-12-16 02:58:14.528270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.127 [2024-12-16 02:58:14.528278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.127 [2024-12-16 02:58:14.528288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.127 [2024-12-16 02:58:14.528303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.128 qpair failed and we were unable to recover it.
00:36:44.128 [2024-12-16 02:58:14.538231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.128 [2024-12-16 02:58:14.538289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.128 [2024-12-16 02:58:14.538304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.128 [2024-12-16 02:58:14.538310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.128 [2024-12-16 02:58:14.538317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.128 [2024-12-16 02:58:14.538331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.128 qpair failed and we were unable to recover it.
00:36:44.128 [2024-12-16 02:58:14.548247] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.128 [2024-12-16 02:58:14.548310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.128 [2024-12-16 02:58:14.548324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.128 [2024-12-16 02:58:14.548331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.128 [2024-12-16 02:58:14.548337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.128 [2024-12-16 02:58:14.548352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.128 qpair failed and we were unable to recover it.
00:36:44.128 [2024-12-16 02:58:14.558279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.128 [2024-12-16 02:58:14.558329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.128 [2024-12-16 02:58:14.558342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.128 [2024-12-16 02:58:14.558349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.128 [2024-12-16 02:58:14.558355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.128 [2024-12-16 02:58:14.558369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.128 qpair failed and we were unable to recover it.
00:36:44.128 [2024-12-16 02:58:14.568319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.128 [2024-12-16 02:58:14.568375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.128 [2024-12-16 02:58:14.568389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.128 [2024-12-16 02:58:14.568395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.128 [2024-12-16 02:58:14.568401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.128 [2024-12-16 02:58:14.568415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.128 qpair failed and we were unable to recover it.
00:36:44.128 [2024-12-16 02:58:14.578284] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.128 [2024-12-16 02:58:14.578379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.128 [2024-12-16 02:58:14.578393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.128 [2024-12-16 02:58:14.578399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.128 [2024-12-16 02:58:14.578406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.128 [2024-12-16 02:58:14.578420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.128 qpair failed and we were unable to recover it.
00:36:44.128 [2024-12-16 02:58:14.588364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.128 [2024-12-16 02:58:14.588450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.128 [2024-12-16 02:58:14.588464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.128 [2024-12-16 02:58:14.588471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.128 [2024-12-16 02:58:14.588477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.128 [2024-12-16 02:58:14.588492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.128 qpair failed and we were unable to recover it.
00:36:44.128 [2024-12-16 02:58:14.598386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.128 [2024-12-16 02:58:14.598439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.128 [2024-12-16 02:58:14.598452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.128 [2024-12-16 02:58:14.598459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.128 [2024-12-16 02:58:14.598465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.128 [2024-12-16 02:58:14.598479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.128 qpair failed and we were unable to recover it.
00:36:44.128 [2024-12-16 02:58:14.608420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.128 [2024-12-16 02:58:14.608476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.128 [2024-12-16 02:58:14.608490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.128 [2024-12-16 02:58:14.608496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.128 [2024-12-16 02:58:14.608503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.128 [2024-12-16 02:58:14.608517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.128 qpair failed and we were unable to recover it.
00:36:44.128 [2024-12-16 02:58:14.618439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.128 [2024-12-16 02:58:14.618498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.128 [2024-12-16 02:58:14.618515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.128 [2024-12-16 02:58:14.618522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.128 [2024-12-16 02:58:14.618528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.128 [2024-12-16 02:58:14.618542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.128 qpair failed and we were unable to recover it.
00:36:44.128 [2024-12-16 02:58:14.628476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.128 [2024-12-16 02:58:14.628550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.128 [2024-12-16 02:58:14.628565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.128 [2024-12-16 02:58:14.628571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.128 [2024-12-16 02:58:14.628577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.128 [2024-12-16 02:58:14.628591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.128 qpair failed and we were unable to recover it.
00:36:44.128 [2024-12-16 02:58:14.638494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.128 [2024-12-16 02:58:14.638547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.128 [2024-12-16 02:58:14.638560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.128 [2024-12-16 02:58:14.638567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.128 [2024-12-16 02:58:14.638573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.128 [2024-12-16 02:58:14.638587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.128 qpair failed and we were unable to recover it.
00:36:44.128 [2024-12-16 02:58:14.648543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.128 [2024-12-16 02:58:14.648596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.128 [2024-12-16 02:58:14.648610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.128 [2024-12-16 02:58:14.648617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.128 [2024-12-16 02:58:14.648624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.128 [2024-12-16 02:58:14.648638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.128 qpair failed and we were unable to recover it.
00:36:44.128 [2024-12-16 02:58:14.658530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.128 [2024-12-16 02:58:14.658585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.128 [2024-12-16 02:58:14.658598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.128 [2024-12-16 02:58:14.658606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.128 [2024-12-16 02:58:14.658615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.128 [2024-12-16 02:58:14.658629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.128 qpair failed and we were unable to recover it.
00:36:44.128 [2024-12-16 02:58:14.668587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.129 [2024-12-16 02:58:14.668637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.129 [2024-12-16 02:58:14.668650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.129 [2024-12-16 02:58:14.668657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.129 [2024-12-16 02:58:14.668663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.129 [2024-12-16 02:58:14.668678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.129 qpair failed and we were unable to recover it.
00:36:44.129 [2024-12-16 02:58:14.678613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.129 [2024-12-16 02:58:14.678673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.129 [2024-12-16 02:58:14.678687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.129 [2024-12-16 02:58:14.678694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.129 [2024-12-16 02:58:14.678700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.129 [2024-12-16 02:58:14.678715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.129 qpair failed and we were unable to recover it.
00:36:44.129 [2024-12-16 02:58:14.688669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.129 [2024-12-16 02:58:14.688725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.129 [2024-12-16 02:58:14.688738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.129 [2024-12-16 02:58:14.688745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.129 [2024-12-16 02:58:14.688752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.129 [2024-12-16 02:58:14.688766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.129 qpair failed and we were unable to recover it.
00:36:44.129 [2024-12-16 02:58:14.698697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.129 [2024-12-16 02:58:14.698761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.129 [2024-12-16 02:58:14.698774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.129 [2024-12-16 02:58:14.698782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.129 [2024-12-16 02:58:14.698788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.129 [2024-12-16 02:58:14.698802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.129 qpair failed and we were unable to recover it.
00:36:44.129 [2024-12-16 02:58:14.708705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.129 [2024-12-16 02:58:14.708765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.129 [2024-12-16 02:58:14.708779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.129 [2024-12-16 02:58:14.708786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.129 [2024-12-16 02:58:14.708792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.129 [2024-12-16 02:58:14.708807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.129 qpair failed and we were unable to recover it.
00:36:44.129 [2024-12-16 02:58:14.718732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.129 [2024-12-16 02:58:14.718786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.129 [2024-12-16 02:58:14.718800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.129 [2024-12-16 02:58:14.718807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.129 [2024-12-16 02:58:14.718814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.129 [2024-12-16 02:58:14.718828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.129 qpair failed and we were unable to recover it.
00:36:44.129 [2024-12-16 02:58:14.728817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.129 [2024-12-16 02:58:14.728895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.129 [2024-12-16 02:58:14.728909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.129 [2024-12-16 02:58:14.728916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.129 [2024-12-16 02:58:14.728923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.129 [2024-12-16 02:58:14.728937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.129 qpair failed and we were unable to recover it.
00:36:44.129 [2024-12-16 02:58:14.738801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.129 [2024-12-16 02:58:14.738894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.129 [2024-12-16 02:58:14.738908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.129 [2024-12-16 02:58:14.738915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.129 [2024-12-16 02:58:14.738921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.129 [2024-12-16 02:58:14.738935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.129 qpair failed and we were unable to recover it. 
00:36:44.129 [2024-12-16 02:58:14.748823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.129 [2024-12-16 02:58:14.748884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.129 [2024-12-16 02:58:14.748903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.129 [2024-12-16 02:58:14.748910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.129 [2024-12-16 02:58:14.748916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.129 [2024-12-16 02:58:14.748930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.129 qpair failed and we were unable to recover it. 
00:36:44.129 [2024-12-16 02:58:14.758850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.129 [2024-12-16 02:58:14.758904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.129 [2024-12-16 02:58:14.758918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.129 [2024-12-16 02:58:14.758925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.129 [2024-12-16 02:58:14.758931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.129 [2024-12-16 02:58:14.758946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.129 qpair failed and we were unable to recover it. 
00:36:44.129 [2024-12-16 02:58:14.768893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.129 [2024-12-16 02:58:14.768965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.129 [2024-12-16 02:58:14.768979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.129 [2024-12-16 02:58:14.768986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.129 [2024-12-16 02:58:14.768993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.129 [2024-12-16 02:58:14.769006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.129 qpair failed and we were unable to recover it. 
00:36:44.129 [2024-12-16 02:58:14.778915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.129 [2024-12-16 02:58:14.778975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.129 [2024-12-16 02:58:14.778989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.129 [2024-12-16 02:58:14.778997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.129 [2024-12-16 02:58:14.779003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.129 [2024-12-16 02:58:14.779018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.129 qpair failed and we were unable to recover it. 
00:36:44.390 [2024-12-16 02:58:14.788938] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.390 [2024-12-16 02:58:14.789014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.390 [2024-12-16 02:58:14.789028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.390 [2024-12-16 02:58:14.789035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.390 [2024-12-16 02:58:14.789045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.390 [2024-12-16 02:58:14.789060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.390 qpair failed and we were unable to recover it. 
00:36:44.390 [2024-12-16 02:58:14.798962] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.390 [2024-12-16 02:58:14.799050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.390 [2024-12-16 02:58:14.799063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.390 [2024-12-16 02:58:14.799069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.390 [2024-12-16 02:58:14.799076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.390 [2024-12-16 02:58:14.799090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.390 qpair failed and we were unable to recover it. 
00:36:44.390 [2024-12-16 02:58:14.808939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.390 [2024-12-16 02:58:14.808994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.390 [2024-12-16 02:58:14.809008] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.390 [2024-12-16 02:58:14.809014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.390 [2024-12-16 02:58:14.809021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.390 [2024-12-16 02:58:14.809035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.390 qpair failed and we were unable to recover it. 
00:36:44.390 [2024-12-16 02:58:14.819044] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.390 [2024-12-16 02:58:14.819099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.390 [2024-12-16 02:58:14.819112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.390 [2024-12-16 02:58:14.819119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.390 [2024-12-16 02:58:14.819125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.390 [2024-12-16 02:58:14.819139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.390 qpair failed and we were unable to recover it. 
00:36:44.390 [2024-12-16 02:58:14.829070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.390 [2024-12-16 02:58:14.829157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.390 [2024-12-16 02:58:14.829171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.390 [2024-12-16 02:58:14.829178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.390 [2024-12-16 02:58:14.829184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.390 [2024-12-16 02:58:14.829198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.390 qpair failed and we were unable to recover it. 
00:36:44.390 [2024-12-16 02:58:14.839061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.390 [2024-12-16 02:58:14.839118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.390 [2024-12-16 02:58:14.839132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.390 [2024-12-16 02:58:14.839139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.390 [2024-12-16 02:58:14.839145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.390 [2024-12-16 02:58:14.839160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.390 qpair failed and we were unable to recover it. 
00:36:44.390 [2024-12-16 02:58:14.849116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.390 [2024-12-16 02:58:14.849172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.390 [2024-12-16 02:58:14.849186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.390 [2024-12-16 02:58:14.849192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.390 [2024-12-16 02:58:14.849199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.390 [2024-12-16 02:58:14.849213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.390 qpair failed and we were unable to recover it. 
00:36:44.390 [2024-12-16 02:58:14.859147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.390 [2024-12-16 02:58:14.859200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.390 [2024-12-16 02:58:14.859213] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.390 [2024-12-16 02:58:14.859220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.390 [2024-12-16 02:58:14.859226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.390 [2024-12-16 02:58:14.859241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.390 qpair failed and we were unable to recover it. 
00:36:44.390 [2024-12-16 02:58:14.869189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.390 [2024-12-16 02:58:14.869248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.390 [2024-12-16 02:58:14.869262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.390 [2024-12-16 02:58:14.869269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.390 [2024-12-16 02:58:14.869275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.390 [2024-12-16 02:58:14.869288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.390 qpair failed and we were unable to recover it. 
00:36:44.390 [2024-12-16 02:58:14.879186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.390 [2024-12-16 02:58:14.879240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.390 [2024-12-16 02:58:14.879257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.391 [2024-12-16 02:58:14.879264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.391 [2024-12-16 02:58:14.879271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.391 [2024-12-16 02:58:14.879285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.391 qpair failed and we were unable to recover it. 
00:36:44.391 [2024-12-16 02:58:14.889210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.391 [2024-12-16 02:58:14.889266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.391 [2024-12-16 02:58:14.889280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.391 [2024-12-16 02:58:14.889286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.391 [2024-12-16 02:58:14.889292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.391 [2024-12-16 02:58:14.889307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.391 qpair failed and we were unable to recover it. 
00:36:44.391 [2024-12-16 02:58:14.899206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.391 [2024-12-16 02:58:14.899261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.391 [2024-12-16 02:58:14.899275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.391 [2024-12-16 02:58:14.899282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.391 [2024-12-16 02:58:14.899288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.391 [2024-12-16 02:58:14.899303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.391 qpair failed and we were unable to recover it. 
00:36:44.391 [2024-12-16 02:58:14.909297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.391 [2024-12-16 02:58:14.909352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.391 [2024-12-16 02:58:14.909366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.391 [2024-12-16 02:58:14.909373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.391 [2024-12-16 02:58:14.909379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.391 [2024-12-16 02:58:14.909393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.391 qpair failed and we were unable to recover it. 
00:36:44.391 [2024-12-16 02:58:14.919311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.391 [2024-12-16 02:58:14.919364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.391 [2024-12-16 02:58:14.919377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.391 [2024-12-16 02:58:14.919384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.391 [2024-12-16 02:58:14.919393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.391 [2024-12-16 02:58:14.919408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.391 qpair failed and we were unable to recover it. 
00:36:44.391 [2024-12-16 02:58:14.929281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.391 [2024-12-16 02:58:14.929345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.391 [2024-12-16 02:58:14.929360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.391 [2024-12-16 02:58:14.929367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.391 [2024-12-16 02:58:14.929373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.391 [2024-12-16 02:58:14.929387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.391 qpair failed and we were unable to recover it. 
00:36:44.391 [2024-12-16 02:58:14.939350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.391 [2024-12-16 02:58:14.939429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.391 [2024-12-16 02:58:14.939443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.391 [2024-12-16 02:58:14.939450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.391 [2024-12-16 02:58:14.939456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.391 [2024-12-16 02:58:14.939471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.391 qpair failed and we were unable to recover it. 
00:36:44.391 [2024-12-16 02:58:14.949431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.391 [2024-12-16 02:58:14.949485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.391 [2024-12-16 02:58:14.949499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.391 [2024-12-16 02:58:14.949506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.391 [2024-12-16 02:58:14.949512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.391 [2024-12-16 02:58:14.949526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.391 qpair failed and we were unable to recover it. 
00:36:44.391 [2024-12-16 02:58:14.959361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.391 [2024-12-16 02:58:14.959415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.391 [2024-12-16 02:58:14.959428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.391 [2024-12-16 02:58:14.959434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.391 [2024-12-16 02:58:14.959441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.391 [2024-12-16 02:58:14.959456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.391 qpair failed and we were unable to recover it. 
00:36:44.391 [2024-12-16 02:58:14.969463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.391 [2024-12-16 02:58:14.969520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.391 [2024-12-16 02:58:14.969533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.391 [2024-12-16 02:58:14.969540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.391 [2024-12-16 02:58:14.969546] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.391 [2024-12-16 02:58:14.969561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.391 qpair failed and we were unable to recover it. 
00:36:44.391 [2024-12-16 02:58:14.979511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.391 [2024-12-16 02:58:14.979571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.391 [2024-12-16 02:58:14.979585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.391 [2024-12-16 02:58:14.979592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.391 [2024-12-16 02:58:14.979599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.391 [2024-12-16 02:58:14.979614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.391 qpair failed and we were unable to recover it. 
00:36:44.391 [2024-12-16 02:58:14.989494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.391 [2024-12-16 02:58:14.989567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.391 [2024-12-16 02:58:14.989581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.391 [2024-12-16 02:58:14.989588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.391 [2024-12-16 02:58:14.989594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.391 [2024-12-16 02:58:14.989609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.391 qpair failed and we were unable to recover it. 
00:36:44.391 [2024-12-16 02:58:14.999553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.391 [2024-12-16 02:58:14.999608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.391 [2024-12-16 02:58:14.999621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.391 [2024-12-16 02:58:14.999627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.391 [2024-12-16 02:58:14.999635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.391 [2024-12-16 02:58:14.999650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.391 qpair failed and we were unable to recover it. 
00:36:44.391 [2024-12-16 02:58:15.009584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.391 [2024-12-16 02:58:15.009657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.391 [2024-12-16 02:58:15.009674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.391 [2024-12-16 02:58:15.009681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.391 [2024-12-16 02:58:15.009687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.392 [2024-12-16 02:58:15.009702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.392 qpair failed and we were unable to recover it.
00:36:44.392 [2024-12-16 02:58:15.019555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.392 [2024-12-16 02:58:15.019640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.392 [2024-12-16 02:58:15.019654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.392 [2024-12-16 02:58:15.019660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.392 [2024-12-16 02:58:15.019666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.392 [2024-12-16 02:58:15.019680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.392 qpair failed and we were unable to recover it.
00:36:44.392 [2024-12-16 02:58:15.029632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.392 [2024-12-16 02:58:15.029689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.392 [2024-12-16 02:58:15.029703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.392 [2024-12-16 02:58:15.029711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.392 [2024-12-16 02:58:15.029717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.392 [2024-12-16 02:58:15.029731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.392 qpair failed and we were unable to recover it.
00:36:44.392 [2024-12-16 02:58:15.039639] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.392 [2024-12-16 02:58:15.039697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.392 [2024-12-16 02:58:15.039711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.392 [2024-12-16 02:58:15.039718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.392 [2024-12-16 02:58:15.039725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.392 [2024-12-16 02:58:15.039739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.392 qpair failed and we were unable to recover it.
00:36:44.653 [2024-12-16 02:58:15.049685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.653 [2024-12-16 02:58:15.049741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.653 [2024-12-16 02:58:15.049755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.653 [2024-12-16 02:58:15.049762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.653 [2024-12-16 02:58:15.049772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.653 [2024-12-16 02:58:15.049787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.653 qpair failed and we were unable to recover it.
00:36:44.653 [2024-12-16 02:58:15.059683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.653 [2024-12-16 02:58:15.059738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.653 [2024-12-16 02:58:15.059752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.653 [2024-12-16 02:58:15.059759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.653 [2024-12-16 02:58:15.059766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.653 [2024-12-16 02:58:15.059780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.653 qpair failed and we were unable to recover it.
00:36:44.653 [2024-12-16 02:58:15.069735] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.653 [2024-12-16 02:58:15.069786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.653 [2024-12-16 02:58:15.069799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.653 [2024-12-16 02:58:15.069806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.653 [2024-12-16 02:58:15.069812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.653 [2024-12-16 02:58:15.069826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.653 qpair failed and we were unable to recover it.
00:36:44.653 [2024-12-16 02:58:15.079735] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.653 [2024-12-16 02:58:15.079791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.653 [2024-12-16 02:58:15.079804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.653 [2024-12-16 02:58:15.079811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.653 [2024-12-16 02:58:15.079818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.653 [2024-12-16 02:58:15.079831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.653 qpair failed and we were unable to recover it.
00:36:44.653 [2024-12-16 02:58:15.089792] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.653 [2024-12-16 02:58:15.089852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.653 [2024-12-16 02:58:15.089866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.653 [2024-12-16 02:58:15.089873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.653 [2024-12-16 02:58:15.089879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.653 [2024-12-16 02:58:15.089894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.653 qpair failed and we were unable to recover it.
00:36:44.653 [2024-12-16 02:58:15.099835] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.653 [2024-12-16 02:58:15.099921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.653 [2024-12-16 02:58:15.099936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.653 [2024-12-16 02:58:15.099943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.653 [2024-12-16 02:58:15.099949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.653 [2024-12-16 02:58:15.099963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.653 qpair failed and we were unable to recover it.
00:36:44.653 [2024-12-16 02:58:15.109836] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.653 [2024-12-16 02:58:15.109894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.653 [2024-12-16 02:58:15.109909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.653 [2024-12-16 02:58:15.109917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.653 [2024-12-16 02:58:15.109924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.653 [2024-12-16 02:58:15.109938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.653 qpair failed and we were unable to recover it.
00:36:44.653 [2024-12-16 02:58:15.119866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.653 [2024-12-16 02:58:15.119917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.653 [2024-12-16 02:58:15.119931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.653 [2024-12-16 02:58:15.119938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.653 [2024-12-16 02:58:15.119944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.653 [2024-12-16 02:58:15.119959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.653 qpair failed and we were unable to recover it.
00:36:44.653 [2024-12-16 02:58:15.129918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.653 [2024-12-16 02:58:15.129976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.653 [2024-12-16 02:58:15.129990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.653 [2024-12-16 02:58:15.129997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.653 [2024-12-16 02:58:15.130004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.653 [2024-12-16 02:58:15.130019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.653 qpair failed and we were unable to recover it.
00:36:44.653 [2024-12-16 02:58:15.139915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.653 [2024-12-16 02:58:15.139969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.653 [2024-12-16 02:58:15.139986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.653 [2024-12-16 02:58:15.139993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.653 [2024-12-16 02:58:15.139999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.653 [2024-12-16 02:58:15.140014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.653 qpair failed and we were unable to recover it.
00:36:44.653 [2024-12-16 02:58:15.150003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.653 [2024-12-16 02:58:15.150068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.653 [2024-12-16 02:58:15.150082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.653 [2024-12-16 02:58:15.150089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.653 [2024-12-16 02:58:15.150095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.653 [2024-12-16 02:58:15.150110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.653 qpair failed and we were unable to recover it.
00:36:44.653 [2024-12-16 02:58:15.160007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.653 [2024-12-16 02:58:15.160062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.653 [2024-12-16 02:58:15.160076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.653 [2024-12-16 02:58:15.160083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.653 [2024-12-16 02:58:15.160089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.653 [2024-12-16 02:58:15.160103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.653 qpair failed and we were unable to recover it.
00:36:44.653 [2024-12-16 02:58:15.170019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.653 [2024-12-16 02:58:15.170075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.653 [2024-12-16 02:58:15.170089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.653 [2024-12-16 02:58:15.170095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.653 [2024-12-16 02:58:15.170101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.653 [2024-12-16 02:58:15.170115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.653 qpair failed and we were unable to recover it.
00:36:44.653 [2024-12-16 02:58:15.179987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.653 [2024-12-16 02:58:15.180043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.653 [2024-12-16 02:58:15.180057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.653 [2024-12-16 02:58:15.180063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.653 [2024-12-16 02:58:15.180072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.653 [2024-12-16 02:58:15.180087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.653 qpair failed and we were unable to recover it.
00:36:44.653 [2024-12-16 02:58:15.190080] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.653 [2024-12-16 02:58:15.190132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.653 [2024-12-16 02:58:15.190147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.653 [2024-12-16 02:58:15.190154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.653 [2024-12-16 02:58:15.190160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.653 [2024-12-16 02:58:15.190174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.653 qpair failed and we were unable to recover it.
00:36:44.653 [2024-12-16 02:58:15.200107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.653 [2024-12-16 02:58:15.200160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.653 [2024-12-16 02:58:15.200174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.653 [2024-12-16 02:58:15.200180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.653 [2024-12-16 02:58:15.200187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.653 [2024-12-16 02:58:15.200201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.653 qpair failed and we were unable to recover it.
00:36:44.653 [2024-12-16 02:58:15.210144] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.653 [2024-12-16 02:58:15.210202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.653 [2024-12-16 02:58:15.210216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.653 [2024-12-16 02:58:15.210222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.653 [2024-12-16 02:58:15.210229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.653 [2024-12-16 02:58:15.210243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.653 qpair failed and we were unable to recover it.
00:36:44.653 [2024-12-16 02:58:15.220100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.653 [2024-12-16 02:58:15.220159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.653 [2024-12-16 02:58:15.220173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.653 [2024-12-16 02:58:15.220180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.653 [2024-12-16 02:58:15.220186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.653 [2024-12-16 02:58:15.220200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.653 qpair failed and we were unable to recover it.
00:36:44.653 [2024-12-16 02:58:15.230190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.653 [2024-12-16 02:58:15.230243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.653 [2024-12-16 02:58:15.230257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.653 [2024-12-16 02:58:15.230264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.653 [2024-12-16 02:58:15.230271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.654 [2024-12-16 02:58:15.230285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.654 qpair failed and we were unable to recover it.
00:36:44.654 [2024-12-16 02:58:15.240218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.654 [2024-12-16 02:58:15.240269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.654 [2024-12-16 02:58:15.240283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.654 [2024-12-16 02:58:15.240290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.654 [2024-12-16 02:58:15.240296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.654 [2024-12-16 02:58:15.240311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.654 qpair failed and we were unable to recover it.
00:36:44.654 [2024-12-16 02:58:15.250259] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.654 [2024-12-16 02:58:15.250317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.654 [2024-12-16 02:58:15.250331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.654 [2024-12-16 02:58:15.250338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.654 [2024-12-16 02:58:15.250344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.654 [2024-12-16 02:58:15.250359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.654 qpair failed and we were unable to recover it.
00:36:44.654 [2024-12-16 02:58:15.260317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.654 [2024-12-16 02:58:15.260370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.654 [2024-12-16 02:58:15.260383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.654 [2024-12-16 02:58:15.260390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.654 [2024-12-16 02:58:15.260396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.654 [2024-12-16 02:58:15.260411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.654 qpair failed and we were unable to recover it.
00:36:44.654 [2024-12-16 02:58:15.270303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.654 [2024-12-16 02:58:15.270357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.654 [2024-12-16 02:58:15.270374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.654 [2024-12-16 02:58:15.270381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.654 [2024-12-16 02:58:15.270388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.654 [2024-12-16 02:58:15.270401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.654 qpair failed and we were unable to recover it.
00:36:44.654 [2024-12-16 02:58:15.280323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.654 [2024-12-16 02:58:15.280376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.654 [2024-12-16 02:58:15.280390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.654 [2024-12-16 02:58:15.280397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.654 [2024-12-16 02:58:15.280403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.654 [2024-12-16 02:58:15.280417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.654 qpair failed and we were unable to recover it.
00:36:44.654 [2024-12-16 02:58:15.290369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.654 [2024-12-16 02:58:15.290430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.654 [2024-12-16 02:58:15.290444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.654 [2024-12-16 02:58:15.290451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.654 [2024-12-16 02:58:15.290457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.654 [2024-12-16 02:58:15.290471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.654 qpair failed and we were unable to recover it.
00:36:44.654 [2024-12-16 02:58:15.300370] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.654 [2024-12-16 02:58:15.300425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.654 [2024-12-16 02:58:15.300438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.654 [2024-12-16 02:58:15.300445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.654 [2024-12-16 02:58:15.300451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.654 [2024-12-16 02:58:15.300466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.654 qpair failed and we were unable to recover it.
00:36:44.654 [2024-12-16 02:58:15.310412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.654 [2024-12-16 02:58:15.310466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.654 [2024-12-16 02:58:15.310479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.654 [2024-12-16 02:58:15.310486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.654 [2024-12-16 02:58:15.310495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.654 [2024-12-16 02:58:15.310509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.654 qpair failed and we were unable to recover it.
00:36:44.914 [2024-12-16 02:58:15.320433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.914 [2024-12-16 02:58:15.320491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.914 [2024-12-16 02:58:15.320504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.914 [2024-12-16 02:58:15.320511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.914 [2024-12-16 02:58:15.320517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.914 [2024-12-16 02:58:15.320533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.914 qpair failed and we were unable to recover it.
00:36:44.914 [2024-12-16 02:58:15.330537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.914 [2024-12-16 02:58:15.330598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.914 [2024-12-16 02:58:15.330613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.914 [2024-12-16 02:58:15.330621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.914 [2024-12-16 02:58:15.330627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.914 [2024-12-16 02:58:15.330642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.914 qpair failed and we were unable to recover it.
00:36:44.914 [2024-12-16 02:58:15.340562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.914 [2024-12-16 02:58:15.340618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.914 [2024-12-16 02:58:15.340632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.914 [2024-12-16 02:58:15.340638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.914 [2024-12-16 02:58:15.340645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.914 [2024-12-16 02:58:15.340659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.914 qpair failed and we were unable to recover it.
00:36:44.914 [2024-12-16 02:58:15.350527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:44.914 [2024-12-16 02:58:15.350581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:44.914 [2024-12-16 02:58:15.350595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:44.914 [2024-12-16 02:58:15.350601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:44.914 [2024-12-16 02:58:15.350608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:44.914 [2024-12-16 02:58:15.350622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:44.914 qpair failed and we were unable to recover it.
00:36:44.914 [2024-12-16 02:58:15.360552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.915 [2024-12-16 02:58:15.360604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.915 [2024-12-16 02:58:15.360618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.915 [2024-12-16 02:58:15.360625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.915 [2024-12-16 02:58:15.360631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.915 [2024-12-16 02:58:15.360645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.915 qpair failed and we were unable to recover it. 
00:36:44.915 [2024-12-16 02:58:15.370596] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.915 [2024-12-16 02:58:15.370661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.915 [2024-12-16 02:58:15.370675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.915 [2024-12-16 02:58:15.370681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.915 [2024-12-16 02:58:15.370688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.915 [2024-12-16 02:58:15.370702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.915 qpair failed and we were unable to recover it. 
00:36:44.915 [2024-12-16 02:58:15.380588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.915 [2024-12-16 02:58:15.380658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.915 [2024-12-16 02:58:15.380672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.915 [2024-12-16 02:58:15.380679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.915 [2024-12-16 02:58:15.380685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.915 [2024-12-16 02:58:15.380699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.915 qpair failed and we were unable to recover it. 
00:36:44.915 [2024-12-16 02:58:15.390652] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.915 [2024-12-16 02:58:15.390709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.915 [2024-12-16 02:58:15.390723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.915 [2024-12-16 02:58:15.390730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.915 [2024-12-16 02:58:15.390736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.915 [2024-12-16 02:58:15.390750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.915 qpair failed and we were unable to recover it. 
00:36:44.915 [2024-12-16 02:58:15.400659] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.915 [2024-12-16 02:58:15.400716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.915 [2024-12-16 02:58:15.400733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.915 [2024-12-16 02:58:15.400740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.915 [2024-12-16 02:58:15.400747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.915 [2024-12-16 02:58:15.400761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.915 qpair failed and we were unable to recover it. 
00:36:44.915 [2024-12-16 02:58:15.410712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.915 [2024-12-16 02:58:15.410777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.915 [2024-12-16 02:58:15.410791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.915 [2024-12-16 02:58:15.410798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.915 [2024-12-16 02:58:15.410805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.915 [2024-12-16 02:58:15.410820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.915 qpair failed and we were unable to recover it. 
00:36:44.915 [2024-12-16 02:58:15.420717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.915 [2024-12-16 02:58:15.420767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.915 [2024-12-16 02:58:15.420784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.915 [2024-12-16 02:58:15.420791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.915 [2024-12-16 02:58:15.420797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.915 [2024-12-16 02:58:15.420813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.915 qpair failed and we were unable to recover it. 
00:36:44.915 [2024-12-16 02:58:15.430796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.915 [2024-12-16 02:58:15.430856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.915 [2024-12-16 02:58:15.430872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.915 [2024-12-16 02:58:15.430879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.915 [2024-12-16 02:58:15.430885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.915 [2024-12-16 02:58:15.430900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.915 qpair failed and we were unable to recover it. 
00:36:44.915 [2024-12-16 02:58:15.440713] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.915 [2024-12-16 02:58:15.440780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.915 [2024-12-16 02:58:15.440794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.915 [2024-12-16 02:58:15.440801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.915 [2024-12-16 02:58:15.440810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.915 [2024-12-16 02:58:15.440825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.915 qpair failed and we were unable to recover it. 
00:36:44.915 [2024-12-16 02:58:15.450810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.915 [2024-12-16 02:58:15.450869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.915 [2024-12-16 02:58:15.450884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.915 [2024-12-16 02:58:15.450891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.915 [2024-12-16 02:58:15.450897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.915 [2024-12-16 02:58:15.450912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.915 qpair failed and we were unable to recover it. 
00:36:44.915 [2024-12-16 02:58:15.460758] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.915 [2024-12-16 02:58:15.460816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.915 [2024-12-16 02:58:15.460830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.915 [2024-12-16 02:58:15.460837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.915 [2024-12-16 02:58:15.460843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.915 [2024-12-16 02:58:15.460863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.915 qpair failed and we were unable to recover it. 
00:36:44.915 [2024-12-16 02:58:15.470863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.915 [2024-12-16 02:58:15.470916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.915 [2024-12-16 02:58:15.470929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.915 [2024-12-16 02:58:15.470936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.915 [2024-12-16 02:58:15.470942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.915 [2024-12-16 02:58:15.470956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.915 qpair failed and we were unable to recover it. 
00:36:44.915 [2024-12-16 02:58:15.480893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.915 [2024-12-16 02:58:15.480970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.915 [2024-12-16 02:58:15.480984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.915 [2024-12-16 02:58:15.480991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.915 [2024-12-16 02:58:15.480997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.915 [2024-12-16 02:58:15.481012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.915 qpair failed and we were unable to recover it. 
00:36:44.915 [2024-12-16 02:58:15.490941] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.915 [2024-12-16 02:58:15.491008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.915 [2024-12-16 02:58:15.491022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.915 [2024-12-16 02:58:15.491029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.915 [2024-12-16 02:58:15.491036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.916 [2024-12-16 02:58:15.491050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.916 qpair failed and we were unable to recover it. 
00:36:44.916 [2024-12-16 02:58:15.500894] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.916 [2024-12-16 02:58:15.500952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.916 [2024-12-16 02:58:15.500965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.916 [2024-12-16 02:58:15.500972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.916 [2024-12-16 02:58:15.500978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.916 [2024-12-16 02:58:15.500992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.916 qpair failed and we were unable to recover it. 
00:36:44.916 [2024-12-16 02:58:15.510981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.916 [2024-12-16 02:58:15.511039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.916 [2024-12-16 02:58:15.511053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.916 [2024-12-16 02:58:15.511059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.916 [2024-12-16 02:58:15.511065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.916 [2024-12-16 02:58:15.511080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.916 qpair failed and we were unable to recover it. 
00:36:44.916 [2024-12-16 02:58:15.521018] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.916 [2024-12-16 02:58:15.521086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.916 [2024-12-16 02:58:15.521102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.916 [2024-12-16 02:58:15.521109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.916 [2024-12-16 02:58:15.521116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.916 [2024-12-16 02:58:15.521132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.916 qpair failed and we were unable to recover it. 
00:36:44.916 [2024-12-16 02:58:15.531053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.916 [2024-12-16 02:58:15.531113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.916 [2024-12-16 02:58:15.531130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.916 [2024-12-16 02:58:15.531137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.916 [2024-12-16 02:58:15.531144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.916 [2024-12-16 02:58:15.531159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.916 qpair failed and we were unable to recover it. 
00:36:44.916 [2024-12-16 02:58:15.541055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.916 [2024-12-16 02:58:15.541137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.916 [2024-12-16 02:58:15.541151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.916 [2024-12-16 02:58:15.541158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.916 [2024-12-16 02:58:15.541164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.916 [2024-12-16 02:58:15.541179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.916 qpair failed and we were unable to recover it. 
00:36:44.916 [2024-12-16 02:58:15.551089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.916 [2024-12-16 02:58:15.551138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.916 [2024-12-16 02:58:15.551151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.916 [2024-12-16 02:58:15.551158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.916 [2024-12-16 02:58:15.551164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.916 [2024-12-16 02:58:15.551179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.916 qpair failed and we were unable to recover it. 
00:36:44.916 [2024-12-16 02:58:15.561122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.916 [2024-12-16 02:58:15.561173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.916 [2024-12-16 02:58:15.561187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.916 [2024-12-16 02:58:15.561194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.916 [2024-12-16 02:58:15.561201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.916 [2024-12-16 02:58:15.561214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.916 qpair failed and we were unable to recover it. 
00:36:44.916 [2024-12-16 02:58:15.571158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:44.916 [2024-12-16 02:58:15.571212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:44.916 [2024-12-16 02:58:15.571226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:44.916 [2024-12-16 02:58:15.571232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:44.916 [2024-12-16 02:58:15.571242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:44.916 [2024-12-16 02:58:15.571256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:44.916 qpair failed and we were unable to recover it. 
00:36:45.176 [2024-12-16 02:58:15.581182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.176 [2024-12-16 02:58:15.581233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.176 [2024-12-16 02:58:15.581248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.176 [2024-12-16 02:58:15.581255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.176 [2024-12-16 02:58:15.581261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.176 [2024-12-16 02:58:15.581275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.176 qpair failed and we were unable to recover it. 
00:36:45.176 [2024-12-16 02:58:15.591243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.176 [2024-12-16 02:58:15.591326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.176 [2024-12-16 02:58:15.591340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.176 [2024-12-16 02:58:15.591347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.176 [2024-12-16 02:58:15.591353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.176 [2024-12-16 02:58:15.591367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.176 qpair failed and we were unable to recover it. 
00:36:45.176 [2024-12-16 02:58:15.601231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.176 [2024-12-16 02:58:15.601297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.176 [2024-12-16 02:58:15.601310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.176 [2024-12-16 02:58:15.601317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.177 [2024-12-16 02:58:15.601323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.177 [2024-12-16 02:58:15.601338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.177 qpair failed and we were unable to recover it. 
00:36:45.177 [2024-12-16 02:58:15.611275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.177 [2024-12-16 02:58:15.611334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.177 [2024-12-16 02:58:15.611347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.177 [2024-12-16 02:58:15.611354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.177 [2024-12-16 02:58:15.611361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.177 [2024-12-16 02:58:15.611375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.177 qpair failed and we were unable to recover it. 
00:36:45.177 [2024-12-16 02:58:15.621397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.177 [2024-12-16 02:58:15.621461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.177 [2024-12-16 02:58:15.621474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.177 [2024-12-16 02:58:15.621480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.177 [2024-12-16 02:58:15.621487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.177 [2024-12-16 02:58:15.621501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.177 qpair failed and we were unable to recover it. 
00:36:45.177 [2024-12-16 02:58:15.631364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.177 [2024-12-16 02:58:15.631422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.177 [2024-12-16 02:58:15.631435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.177 [2024-12-16 02:58:15.631443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.177 [2024-12-16 02:58:15.631449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.177 [2024-12-16 02:58:15.631463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.177 qpair failed and we were unable to recover it. 
00:36:45.177 [2024-12-16 02:58:15.641388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.177 [2024-12-16 02:58:15.641443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.177 [2024-12-16 02:58:15.641457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.177 [2024-12-16 02:58:15.641464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.177 [2024-12-16 02:58:15.641470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.177 [2024-12-16 02:58:15.641484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.177 qpair failed and we were unable to recover it. 
00:36:45.177 [2024-12-16 02:58:15.651398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.177 [2024-12-16 02:58:15.651465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.177 [2024-12-16 02:58:15.651478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.177 [2024-12-16 02:58:15.651486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.177 [2024-12-16 02:58:15.651492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.177 [2024-12-16 02:58:15.651505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.177 qpair failed and we were unable to recover it. 
00:36:45.177 [2024-12-16 02:58:15.661427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.177 [2024-12-16 02:58:15.661479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.177 [2024-12-16 02:58:15.661495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.177 [2024-12-16 02:58:15.661502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.177 [2024-12-16 02:58:15.661508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.177 [2024-12-16 02:58:15.661523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.177 qpair failed and we were unable to recover it. 
00:36:45.177 [2024-12-16 02:58:15.671427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.177 [2024-12-16 02:58:15.671514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.177 [2024-12-16 02:58:15.671527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.177 [2024-12-16 02:58:15.671534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.177 [2024-12-16 02:58:15.671540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.177 [2024-12-16 02:58:15.671554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.177 qpair failed and we were unable to recover it. 
00:36:45.177 [2024-12-16 02:58:15.681480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.177 [2024-12-16 02:58:15.681539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.177 [2024-12-16 02:58:15.681554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.177 [2024-12-16 02:58:15.681561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.177 [2024-12-16 02:58:15.681567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.177 [2024-12-16 02:58:15.681581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.177 qpair failed and we were unable to recover it. 
00:36:45.177 [2024-12-16 02:58:15.691548] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.177 [2024-12-16 02:58:15.691626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.177 [2024-12-16 02:58:15.691640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.177 [2024-12-16 02:58:15.691647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.177 [2024-12-16 02:58:15.691653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.177 [2024-12-16 02:58:15.691667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.177 qpair failed and we were unable to recover it. 
00:36:45.177 [2024-12-16 02:58:15.701457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.177 [2024-12-16 02:58:15.701519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.177 [2024-12-16 02:58:15.701532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.177 [2024-12-16 02:58:15.701539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.177 [2024-12-16 02:58:15.701549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.177 [2024-12-16 02:58:15.701563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.177 qpair failed and we were unable to recover it. 
00:36:45.177 [2024-12-16 02:58:15.711544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.177 [2024-12-16 02:58:15.711601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.177 [2024-12-16 02:58:15.711614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.177 [2024-12-16 02:58:15.711620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.177 [2024-12-16 02:58:15.711627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.177 [2024-12-16 02:58:15.711640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.177 qpair failed and we were unable to recover it. 
00:36:45.177 [2024-12-16 02:58:15.721579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.177 [2024-12-16 02:58:15.721629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.177 [2024-12-16 02:58:15.721643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.177 [2024-12-16 02:58:15.721650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.177 [2024-12-16 02:58:15.721657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.177 [2024-12-16 02:58:15.721672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.177 qpair failed and we were unable to recover it. 
00:36:45.177 [2024-12-16 02:58:15.731619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.177 [2024-12-16 02:58:15.731672] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.177 [2024-12-16 02:58:15.731687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.177 [2024-12-16 02:58:15.731693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.177 [2024-12-16 02:58:15.731699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.177 [2024-12-16 02:58:15.731713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.177 qpair failed and we were unable to recover it. 
00:36:45.177 [2024-12-16 02:58:15.741638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.178 [2024-12-16 02:58:15.741692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.178 [2024-12-16 02:58:15.741707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.178 [2024-12-16 02:58:15.741714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.178 [2024-12-16 02:58:15.741720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.178 [2024-12-16 02:58:15.741734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.178 qpair failed and we were unable to recover it. 
00:36:45.178 [2024-12-16 02:58:15.751667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.178 [2024-12-16 02:58:15.751721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.178 [2024-12-16 02:58:15.751735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.178 [2024-12-16 02:58:15.751741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.178 [2024-12-16 02:58:15.751747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.178 [2024-12-16 02:58:15.751761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.178 qpair failed and we were unable to recover it. 
00:36:45.178 [2024-12-16 02:58:15.761693] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.178 [2024-12-16 02:58:15.761746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.178 [2024-12-16 02:58:15.761760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.178 [2024-12-16 02:58:15.761767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.178 [2024-12-16 02:58:15.761773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.178 [2024-12-16 02:58:15.761787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.178 qpair failed and we were unable to recover it. 
00:36:45.178 [2024-12-16 02:58:15.771736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.178 [2024-12-16 02:58:15.771792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.178 [2024-12-16 02:58:15.771806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.178 [2024-12-16 02:58:15.771813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.178 [2024-12-16 02:58:15.771820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.178 [2024-12-16 02:58:15.771834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.178 qpair failed and we were unable to recover it. 
00:36:45.178 [2024-12-16 02:58:15.781747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.178 [2024-12-16 02:58:15.781828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.178 [2024-12-16 02:58:15.781842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.178 [2024-12-16 02:58:15.781852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.178 [2024-12-16 02:58:15.781859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.178 [2024-12-16 02:58:15.781873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.178 qpair failed and we were unable to recover it. 
00:36:45.178 [2024-12-16 02:58:15.791772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.178 [2024-12-16 02:58:15.791856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.178 [2024-12-16 02:58:15.791873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.178 [2024-12-16 02:58:15.791880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.178 [2024-12-16 02:58:15.791886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.178 [2024-12-16 02:58:15.791900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.178 qpair failed and we were unable to recover it. 
00:36:45.178 [2024-12-16 02:58:15.801810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.178 [2024-12-16 02:58:15.801862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.178 [2024-12-16 02:58:15.801876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.178 [2024-12-16 02:58:15.801882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.178 [2024-12-16 02:58:15.801889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.178 [2024-12-16 02:58:15.801903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.178 qpair failed and we were unable to recover it. 
00:36:45.178 [2024-12-16 02:58:15.811852] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.178 [2024-12-16 02:58:15.811910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.178 [2024-12-16 02:58:15.811923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.178 [2024-12-16 02:58:15.811930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.178 [2024-12-16 02:58:15.811937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.178 [2024-12-16 02:58:15.811951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.178 qpair failed and we were unable to recover it. 
00:36:45.178 [2024-12-16 02:58:15.821880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.178 [2024-12-16 02:58:15.821937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.178 [2024-12-16 02:58:15.821951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.178 [2024-12-16 02:58:15.821957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.178 [2024-12-16 02:58:15.821963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.178 [2024-12-16 02:58:15.821978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.178 qpair failed and we were unable to recover it. 
00:36:45.178 [2024-12-16 02:58:15.831884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.178 [2024-12-16 02:58:15.831937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.178 [2024-12-16 02:58:15.831951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.178 [2024-12-16 02:58:15.831957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.178 [2024-12-16 02:58:15.831967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.178 [2024-12-16 02:58:15.831982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.178 qpair failed and we were unable to recover it. 
00:36:45.439 [2024-12-16 02:58:15.841843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.439 [2024-12-16 02:58:15.841914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.439 [2024-12-16 02:58:15.841929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.439 [2024-12-16 02:58:15.841936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.439 [2024-12-16 02:58:15.841943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.439 [2024-12-16 02:58:15.841957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.439 qpair failed and we were unable to recover it. 
00:36:45.439 [2024-12-16 02:58:15.851955] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.439 [2024-12-16 02:58:15.852012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.439 [2024-12-16 02:58:15.852025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.439 [2024-12-16 02:58:15.852031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.439 [2024-12-16 02:58:15.852038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.439 [2024-12-16 02:58:15.852051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.439 qpair failed and we were unable to recover it. 
00:36:45.439 [2024-12-16 02:58:15.861985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.439 [2024-12-16 02:58:15.862041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.439 [2024-12-16 02:58:15.862054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.439 [2024-12-16 02:58:15.862061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.439 [2024-12-16 02:58:15.862068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.439 [2024-12-16 02:58:15.862083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.439 qpair failed and we were unable to recover it. 
00:36:45.439 [2024-12-16 02:58:15.872003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.439 [2024-12-16 02:58:15.872058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.439 [2024-12-16 02:58:15.872072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.439 [2024-12-16 02:58:15.872079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.439 [2024-12-16 02:58:15.872085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.439 [2024-12-16 02:58:15.872099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.440 qpair failed and we were unable to recover it. 
00:36:45.440 [2024-12-16 02:58:15.882041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.440 [2024-12-16 02:58:15.882096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.440 [2024-12-16 02:58:15.882110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.440 [2024-12-16 02:58:15.882117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.440 [2024-12-16 02:58:15.882123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.440 [2024-12-16 02:58:15.882137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.440 qpair failed and we were unable to recover it. 
00:36:45.440 [2024-12-16 02:58:15.892075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.440 [2024-12-16 02:58:15.892131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.440 [2024-12-16 02:58:15.892145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.440 [2024-12-16 02:58:15.892152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.440 [2024-12-16 02:58:15.892158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.440 [2024-12-16 02:58:15.892172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.440 qpair failed and we were unable to recover it. 
00:36:45.440 [2024-12-16 02:58:15.902089] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.440 [2024-12-16 02:58:15.902146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.440 [2024-12-16 02:58:15.902159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.440 [2024-12-16 02:58:15.902166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.440 [2024-12-16 02:58:15.902172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.440 [2024-12-16 02:58:15.902187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.440 qpair failed and we were unable to recover it. 
00:36:45.440 [2024-12-16 02:58:15.912122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.440 [2024-12-16 02:58:15.912207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.440 [2024-12-16 02:58:15.912220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.440 [2024-12-16 02:58:15.912227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.440 [2024-12-16 02:58:15.912233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.440 [2024-12-16 02:58:15.912247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.440 qpair failed and we were unable to recover it. 
00:36:45.440 [2024-12-16 02:58:15.922138] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.440 [2024-12-16 02:58:15.922203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.440 [2024-12-16 02:58:15.922219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.440 [2024-12-16 02:58:15.922226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.440 [2024-12-16 02:58:15.922232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.440 [2024-12-16 02:58:15.922247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.440 qpair failed and we were unable to recover it. 
00:36:45.440 [2024-12-16 02:58:15.932217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.440 [2024-12-16 02:58:15.932273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.440 [2024-12-16 02:58:15.932287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.440 [2024-12-16 02:58:15.932293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.440 [2024-12-16 02:58:15.932300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.440 [2024-12-16 02:58:15.932314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.440 qpair failed and we were unable to recover it. 
00:36:45.440 [2024-12-16 02:58:15.942205] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.440 [2024-12-16 02:58:15.942261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.440 [2024-12-16 02:58:15.942274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.440 [2024-12-16 02:58:15.942281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.440 [2024-12-16 02:58:15.942288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.440 [2024-12-16 02:58:15.942302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.440 qpair failed and we were unable to recover it. 
00:36:45.441 [the identical seven-line CONNECT failure sequence (ctrlr.c:764 "Unknown controller ID 0x1" -> nvme_fabric.c:599/610 "Connect command failed, rc -5" / "sct 1, sc 130" -> nvme_tcp.c:2348/2125 -> nvme_qpair.c:812 "CQ transport error -6 (No such device or address) on qpair id 3" -> "qpair failed and we were unable to recover it.") repeats at ~10 ms intervals, 34 more times, from [2024-12-16 02:58:15.952231] through [2024-12-16 02:58:16.283348]]
00:36:45.703 [2024-12-16 02:58:16.293165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.703 [2024-12-16 02:58:16.293224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.703 [2024-12-16 02:58:16.293237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.703 [2024-12-16 02:58:16.293243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.703 [2024-12-16 02:58:16.293250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.703 [2024-12-16 02:58:16.293264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.703 qpair failed and we were unable to recover it. 
00:36:45.703 [2024-12-16 02:58:16.303209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.703 [2024-12-16 02:58:16.303286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.703 [2024-12-16 02:58:16.303299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.703 [2024-12-16 02:58:16.303306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.703 [2024-12-16 02:58:16.303312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.703 [2024-12-16 02:58:16.303327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.703 qpair failed and we were unable to recover it. 
00:36:45.703 [2024-12-16 02:58:16.313231] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.703 [2024-12-16 02:58:16.313284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.703 [2024-12-16 02:58:16.313301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.703 [2024-12-16 02:58:16.313308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.703 [2024-12-16 02:58:16.313315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.703 [2024-12-16 02:58:16.313328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.703 qpair failed and we were unable to recover it. 
00:36:45.703 [2024-12-16 02:58:16.323303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.703 [2024-12-16 02:58:16.323365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.703 [2024-12-16 02:58:16.323380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.703 [2024-12-16 02:58:16.323387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.703 [2024-12-16 02:58:16.323393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.703 [2024-12-16 02:58:16.323408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.703 qpair failed and we were unable to recover it. 
00:36:45.703 [2024-12-16 02:58:16.333276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.703 [2024-12-16 02:58:16.333334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.704 [2024-12-16 02:58:16.333347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.704 [2024-12-16 02:58:16.333353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.704 [2024-12-16 02:58:16.333360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.704 [2024-12-16 02:58:16.333374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.704 qpair failed and we were unable to recover it. 
00:36:45.704 [2024-12-16 02:58:16.343288] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.704 [2024-12-16 02:58:16.343341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.704 [2024-12-16 02:58:16.343355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.704 [2024-12-16 02:58:16.343361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.704 [2024-12-16 02:58:16.343367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.704 [2024-12-16 02:58:16.343382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.704 qpair failed and we were unable to recover it. 
00:36:45.704 [2024-12-16 02:58:16.353311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.704 [2024-12-16 02:58:16.353366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.704 [2024-12-16 02:58:16.353380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.704 [2024-12-16 02:58:16.353387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.704 [2024-12-16 02:58:16.353397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.704 [2024-12-16 02:58:16.353410] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.704 qpair failed and we were unable to recover it. 
00:36:45.964 [2024-12-16 02:58:16.363352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.964 [2024-12-16 02:58:16.363409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.964 [2024-12-16 02:58:16.363423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.964 [2024-12-16 02:58:16.363430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.964 [2024-12-16 02:58:16.363437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.964 [2024-12-16 02:58:16.363451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.964 qpair failed and we were unable to recover it. 
00:36:45.964 [2024-12-16 02:58:16.373462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.964 [2024-12-16 02:58:16.373515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.964 [2024-12-16 02:58:16.373529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.964 [2024-12-16 02:58:16.373535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.964 [2024-12-16 02:58:16.373541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.964 [2024-12-16 02:58:16.373555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.964 qpair failed and we were unable to recover it. 
00:36:45.964 [2024-12-16 02:58:16.383408] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.964 [2024-12-16 02:58:16.383499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.964 [2024-12-16 02:58:16.383513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.964 [2024-12-16 02:58:16.383520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.964 [2024-12-16 02:58:16.383526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.964 [2024-12-16 02:58:16.383540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.964 qpair failed and we were unable to recover it. 
00:36:45.964 [2024-12-16 02:58:16.393512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.964 [2024-12-16 02:58:16.393568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.964 [2024-12-16 02:58:16.393581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.964 [2024-12-16 02:58:16.393588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.964 [2024-12-16 02:58:16.393594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.964 [2024-12-16 02:58:16.393608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.964 qpair failed and we were unable to recover it. 
00:36:45.964 [2024-12-16 02:58:16.403489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.964 [2024-12-16 02:58:16.403545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.964 [2024-12-16 02:58:16.403558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.964 [2024-12-16 02:58:16.403564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.964 [2024-12-16 02:58:16.403571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.964 [2024-12-16 02:58:16.403585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.964 qpair failed and we were unable to recover it. 
00:36:45.964 [2024-12-16 02:58:16.413492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.964 [2024-12-16 02:58:16.413574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.964 [2024-12-16 02:58:16.413588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.964 [2024-12-16 02:58:16.413594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.964 [2024-12-16 02:58:16.413600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.964 [2024-12-16 02:58:16.413614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.964 qpair failed and we were unable to recover it. 
00:36:45.964 [2024-12-16 02:58:16.423529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.964 [2024-12-16 02:58:16.423584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.964 [2024-12-16 02:58:16.423599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.964 [2024-12-16 02:58:16.423606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.964 [2024-12-16 02:58:16.423612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.964 [2024-12-16 02:58:16.423626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.965 qpair failed and we were unable to recover it. 
00:36:45.965 [2024-12-16 02:58:16.433625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.965 [2024-12-16 02:58:16.433683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.965 [2024-12-16 02:58:16.433697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.965 [2024-12-16 02:58:16.433704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.965 [2024-12-16 02:58:16.433711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.965 [2024-12-16 02:58:16.433725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.965 qpair failed and we were unable to recover it. 
00:36:45.965 [2024-12-16 02:58:16.443631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.965 [2024-12-16 02:58:16.443687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.965 [2024-12-16 02:58:16.443705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.965 [2024-12-16 02:58:16.443712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.965 [2024-12-16 02:58:16.443718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.965 [2024-12-16 02:58:16.443733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.965 qpair failed and we were unable to recover it. 
00:36:45.965 [2024-12-16 02:58:16.453667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.965 [2024-12-16 02:58:16.453733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.965 [2024-12-16 02:58:16.453747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.965 [2024-12-16 02:58:16.453753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.965 [2024-12-16 02:58:16.453760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.965 [2024-12-16 02:58:16.453775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.965 qpair failed and we were unable to recover it. 
00:36:45.965 [2024-12-16 02:58:16.463755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.965 [2024-12-16 02:58:16.463815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.965 [2024-12-16 02:58:16.463829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.965 [2024-12-16 02:58:16.463835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.965 [2024-12-16 02:58:16.463842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.965 [2024-12-16 02:58:16.463860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.965 qpair failed and we were unable to recover it. 
00:36:45.965 [2024-12-16 02:58:16.473753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.965 [2024-12-16 02:58:16.473808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.965 [2024-12-16 02:58:16.473822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.965 [2024-12-16 02:58:16.473828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.965 [2024-12-16 02:58:16.473835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.965 [2024-12-16 02:58:16.473855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.965 qpair failed and we were unable to recover it. 
00:36:45.965 [2024-12-16 02:58:16.483781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.965 [2024-12-16 02:58:16.483831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.965 [2024-12-16 02:58:16.483845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.965 [2024-12-16 02:58:16.483856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.965 [2024-12-16 02:58:16.483866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.965 [2024-12-16 02:58:16.483881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.965 qpair failed and we were unable to recover it. 
00:36:45.965 [2024-12-16 02:58:16.493757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.965 [2024-12-16 02:58:16.493837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.965 [2024-12-16 02:58:16.493856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.965 [2024-12-16 02:58:16.493863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.965 [2024-12-16 02:58:16.493869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.965 [2024-12-16 02:58:16.493883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.965 qpair failed and we were unable to recover it. 
00:36:45.965 [2024-12-16 02:58:16.503829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.965 [2024-12-16 02:58:16.503892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.965 [2024-12-16 02:58:16.503907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.965 [2024-12-16 02:58:16.503914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.965 [2024-12-16 02:58:16.503920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.965 [2024-12-16 02:58:16.503935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.965 qpair failed and we were unable to recover it. 
00:36:45.965 [2024-12-16 02:58:16.513870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.965 [2024-12-16 02:58:16.513930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.965 [2024-12-16 02:58:16.513945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.965 [2024-12-16 02:58:16.513951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.965 [2024-12-16 02:58:16.513958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.965 [2024-12-16 02:58:16.513973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.965 qpair failed and we were unable to recover it. 
00:36:45.965 [2024-12-16 02:58:16.523837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.965 [2024-12-16 02:58:16.523919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.965 [2024-12-16 02:58:16.523936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.965 [2024-12-16 02:58:16.523943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.965 [2024-12-16 02:58:16.523949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.965 [2024-12-16 02:58:16.523964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.965 qpair failed and we were unable to recover it. 
00:36:45.965 [2024-12-16 02:58:16.533959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.965 [2024-12-16 02:58:16.534065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.965 [2024-12-16 02:58:16.534079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.965 [2024-12-16 02:58:16.534085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.965 [2024-12-16 02:58:16.534092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.965 [2024-12-16 02:58:16.534106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.965 qpair failed and we were unable to recover it. 
00:36:45.965 [2024-12-16 02:58:16.543879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.965 [2024-12-16 02:58:16.543934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.965 [2024-12-16 02:58:16.543948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.965 [2024-12-16 02:58:16.543955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.965 [2024-12-16 02:58:16.543961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.965 [2024-12-16 02:58:16.543976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.965 qpair failed and we were unable to recover it. 
00:36:45.965 [2024-12-16 02:58:16.553956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:45.965 [2024-12-16 02:58:16.554012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:45.965 [2024-12-16 02:58:16.554025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:45.965 [2024-12-16 02:58:16.554032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:45.965 [2024-12-16 02:58:16.554038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:45.965 [2024-12-16 02:58:16.554053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:45.965 qpair failed and we were unable to recover it. 
00:36:45.965 [2024-12-16 02:58:16.563995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:45.965 [2024-12-16 02:58:16.564047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:45.966 [2024-12-16 02:58:16.564060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:45.966 [2024-12-16 02:58:16.564067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:45.966 [2024-12-16 02:58:16.564073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:45.966 [2024-12-16 02:58:16.564087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:45.966 qpair failed and we were unable to recover it.
00:36:45.966 [2024-12-16 02:58:16.573998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:45.966 [2024-12-16 02:58:16.574076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:45.966 [2024-12-16 02:58:16.574093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:45.966 [2024-12-16 02:58:16.574100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:45.966 [2024-12-16 02:58:16.574106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:45.966 [2024-12-16 02:58:16.574120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:45.966 qpair failed and we were unable to recover it.
00:36:45.966 [2024-12-16 02:58:16.584054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:45.966 [2024-12-16 02:58:16.584106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:45.966 [2024-12-16 02:58:16.584120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:45.966 [2024-12-16 02:58:16.584126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:45.966 [2024-12-16 02:58:16.584132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:45.966 [2024-12-16 02:58:16.584147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:45.966 qpair failed and we were unable to recover it.
00:36:45.966 [2024-12-16 02:58:16.594119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:45.966 [2024-12-16 02:58:16.594196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:45.966 [2024-12-16 02:58:16.594210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:45.966 [2024-12-16 02:58:16.594217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:45.966 [2024-12-16 02:58:16.594223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:45.966 [2024-12-16 02:58:16.594238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:45.966 qpair failed and we were unable to recover it.
00:36:45.966 [2024-12-16 02:58:16.604119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:45.966 [2024-12-16 02:58:16.604178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:45.966 [2024-12-16 02:58:16.604192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:45.966 [2024-12-16 02:58:16.604198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:45.966 [2024-12-16 02:58:16.604204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:45.966 [2024-12-16 02:58:16.604219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:45.966 qpair failed and we were unable to recover it.
00:36:45.966 [2024-12-16 02:58:16.614069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:45.966 [2024-12-16 02:58:16.614123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:45.966 [2024-12-16 02:58:16.614136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:45.966 [2024-12-16 02:58:16.614143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:45.966 [2024-12-16 02:58:16.614153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:45.966 [2024-12-16 02:58:16.614168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:45.966 qpair failed and we were unable to recover it.
00:36:46.226 [2024-12-16 02:58:16.624150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:46.226 [2024-12-16 02:58:16.624254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:46.226 [2024-12-16 02:58:16.624267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:46.226 [2024-12-16 02:58:16.624275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:46.226 [2024-12-16 02:58:16.624280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:46.226 [2024-12-16 02:58:16.624295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:46.226 qpair failed and we were unable to recover it.
00:36:46.226 [2024-12-16 02:58:16.634178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:46.226 [2024-12-16 02:58:16.634232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:46.226 [2024-12-16 02:58:16.634246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:46.226 [2024-12-16 02:58:16.634253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:46.226 [2024-12-16 02:58:16.634259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:46.226 [2024-12-16 02:58:16.634274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:46.226 qpair failed and we were unable to recover it.
00:36:46.226 [2024-12-16 02:58:16.644206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:46.226 [2024-12-16 02:58:16.644259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:46.226 [2024-12-16 02:58:16.644272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:46.226 [2024-12-16 02:58:16.644279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:46.226 [2024-12-16 02:58:16.644285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:46.226 [2024-12-16 02:58:16.644299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:46.226 qpair failed and we were unable to recover it.
00:36:46.226 [2024-12-16 02:58:16.654315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:46.226 [2024-12-16 02:58:16.654413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:46.226 [2024-12-16 02:58:16.654427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:46.226 [2024-12-16 02:58:16.654433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:46.226 [2024-12-16 02:58:16.654439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:46.226 [2024-12-16 02:58:16.654453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:46.226 qpair failed and we were unable to recover it.
00:36:46.226 [2024-12-16 02:58:16.664272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:46.226 [2024-12-16 02:58:16.664323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:46.226 [2024-12-16 02:58:16.664336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:46.226 [2024-12-16 02:58:16.664342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:46.226 [2024-12-16 02:58:16.664348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:46.226 [2024-12-16 02:58:16.664362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:46.226 qpair failed and we were unable to recover it.
00:36:46.226 [2024-12-16 02:58:16.674287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:46.226 [2024-12-16 02:58:16.674341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:46.226 [2024-12-16 02:58:16.674354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:46.226 [2024-12-16 02:58:16.674360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:46.226 [2024-12-16 02:58:16.674367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:46.226 [2024-12-16 02:58:16.674381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:46.226 qpair failed and we were unable to recover it.
00:36:46.226 [2024-12-16 02:58:16.684341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:46.226 [2024-12-16 02:58:16.684398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:46.226 [2024-12-16 02:58:16.684411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:46.226 [2024-12-16 02:58:16.684418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:46.226 [2024-12-16 02:58:16.684425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:46.226 [2024-12-16 02:58:16.684439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:46.226 qpair failed and we were unable to recover it.
00:36:46.226 [2024-12-16 02:58:16.694443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:46.226 [2024-12-16 02:58:16.694517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:46.226 [2024-12-16 02:58:16.694531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:46.226 [2024-12-16 02:58:16.694537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:46.226 [2024-12-16 02:58:16.694543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:46.226 [2024-12-16 02:58:16.694556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:46.226 qpair failed and we were unable to recover it.
00:36:46.226 [2024-12-16 02:58:16.704379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:46.226 [2024-12-16 02:58:16.704444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:46.226 [2024-12-16 02:58:16.704461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:46.226 [2024-12-16 02:58:16.704468] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:46.226 [2024-12-16 02:58:16.704474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:46.226 [2024-12-16 02:58:16.704487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:46.226 qpair failed and we were unable to recover it.
00:36:46.226 [2024-12-16 02:58:16.714405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:46.226 [2024-12-16 02:58:16.714475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:46.226 [2024-12-16 02:58:16.714489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:46.226 [2024-12-16 02:58:16.714495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:46.226 [2024-12-16 02:58:16.714502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:46.226 [2024-12-16 02:58:16.714515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:46.226 qpair failed and we were unable to recover it.
00:36:46.226 [2024-12-16 02:58:16.724455] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:46.226 [2024-12-16 02:58:16.724509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:46.226 [2024-12-16 02:58:16.724523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:46.226 [2024-12-16 02:58:16.724530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:46.226 [2024-12-16 02:58:16.724536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:46.226 [2024-12-16 02:58:16.724551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:46.226 qpair failed and we were unable to recover it.
00:36:46.226 [2024-12-16 02:58:16.734475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:46.226 [2024-12-16 02:58:16.734530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:46.226 [2024-12-16 02:58:16.734544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:46.226 [2024-12-16 02:58:16.734551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:46.226 [2024-12-16 02:58:16.734557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:46.226 [2024-12-16 02:58:16.734571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:46.226 qpair failed and we were unable to recover it.
00:36:46.226 [2024-12-16 02:58:16.744540] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:46.226 [2024-12-16 02:58:16.744599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:46.226 [2024-12-16 02:58:16.744613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:46.226 [2024-12-16 02:58:16.744621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:46.226 [2024-12-16 02:58:16.744630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:46.226 [2024-12-16 02:58:16.744645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:46.226 qpair failed and we were unable to recover it.
00:36:46.226 [2024-12-16 02:58:16.754522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:46.226 [2024-12-16 02:58:16.754574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:46.226 [2024-12-16 02:58:16.754587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:46.226 [2024-12-16 02:58:16.754594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:46.226 [2024-12-16 02:58:16.754600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:46.226 [2024-12-16 02:58:16.754615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:46.226 qpair failed and we were unable to recover it.
00:36:46.226 [2024-12-16 02:58:16.764558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:46.226 [2024-12-16 02:58:16.764608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:46.226 [2024-12-16 02:58:16.764622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:46.226 [2024-12-16 02:58:16.764629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:46.226 [2024-12-16 02:58:16.764636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:46.226 [2024-12-16 02:58:16.764649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:46.226 qpair failed and we were unable to recover it.
00:36:46.226 [2024-12-16 02:58:16.774633] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:46.226 [2024-12-16 02:58:16.774727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:46.226 [2024-12-16 02:58:16.774740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:46.226 [2024-12-16 02:58:16.774747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:46.226 [2024-12-16 02:58:16.774753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:46.226 [2024-12-16 02:58:16.774767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:46.226 qpair failed and we were unable to recover it.
00:36:46.226 [2024-12-16 02:58:16.784615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:46.226 [2024-12-16 02:58:16.784674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:46.226 [2024-12-16 02:58:16.784688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:46.226 [2024-12-16 02:58:16.784695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:46.226 [2024-12-16 02:58:16.784701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:46.226 [2024-12-16 02:58:16.784715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:46.226 qpair failed and we were unable to recover it.
00:36:46.226 [2024-12-16 02:58:16.794642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:46.226 [2024-12-16 02:58:16.794698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:46.226 [2024-12-16 02:58:16.794711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:46.226 [2024-12-16 02:58:16.794719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:46.226 [2024-12-16 02:58:16.794726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:46.226 [2024-12-16 02:58:16.794740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:46.226 qpair failed and we were unable to recover it.
00:36:46.226 [2024-12-16 02:58:16.804668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:46.226 [2024-12-16 02:58:16.804771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:46.226 [2024-12-16 02:58:16.804785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:46.227 [2024-12-16 02:58:16.804792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:46.227 [2024-12-16 02:58:16.804798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:46.227 [2024-12-16 02:58:16.804812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:46.227 qpair failed and we were unable to recover it.
00:36:46.227 [2024-12-16 02:58:16.814667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:46.227 [2024-12-16 02:58:16.814722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:46.227 [2024-12-16 02:58:16.814735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:46.227 [2024-12-16 02:58:16.814741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:46.227 [2024-12-16 02:58:16.814747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:46.227 [2024-12-16 02:58:16.814762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:46.227 qpair failed and we were unable to recover it.
00:36:46.227 [2024-12-16 02:58:16.824739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:46.227 [2024-12-16 02:58:16.824793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:46.227 [2024-12-16 02:58:16.824807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:46.227 [2024-12-16 02:58:16.824813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:46.227 [2024-12-16 02:58:16.824819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:46.227 [2024-12-16 02:58:16.824833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:46.227 qpair failed and we were unable to recover it.
00:36:46.227 [2024-12-16 02:58:16.834801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:46.227 [2024-12-16 02:58:16.834864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:46.227 [2024-12-16 02:58:16.834882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:46.227 [2024-12-16 02:58:16.834890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:46.227 [2024-12-16 02:58:16.834896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:46.227 [2024-12-16 02:58:16.834910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:46.227 qpair failed and we were unable to recover it.
00:36:46.227 [2024-12-16 02:58:16.844796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:46.227 [2024-12-16 02:58:16.844855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:46.227 [2024-12-16 02:58:16.844870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:46.227 [2024-12-16 02:58:16.844877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:46.227 [2024-12-16 02:58:16.844883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:46.227 [2024-12-16 02:58:16.844897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:46.227 qpair failed and we were unable to recover it.
00:36:46.227 [2024-12-16 02:58:16.854844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:46.227 [2024-12-16 02:58:16.854904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:46.227 [2024-12-16 02:58:16.854918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:46.227 [2024-12-16 02:58:16.854924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:46.227 [2024-12-16 02:58:16.854931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:46.227 [2024-12-16 02:58:16.854945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:46.227 qpair failed and we were unable to recover it.
00:36:46.227 [2024-12-16 02:58:16.864881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:46.227 [2024-12-16 02:58:16.864935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:46.227 [2024-12-16 02:58:16.864948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:46.227 [2024-12-16 02:58:16.864955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:46.227 [2024-12-16 02:58:16.864962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:46.227 [2024-12-16 02:58:16.864976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:46.227 qpair failed and we were unable to recover it.
00:36:46.227 [2024-12-16 02:58:16.874912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:46.227 [2024-12-16 02:58:16.874967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:46.227 [2024-12-16 02:58:16.874980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:46.227 [2024-12-16 02:58:16.874987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:46.227 [2024-12-16 02:58:16.874997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:46.227 [2024-12-16 02:58:16.875012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:46.227 qpair failed and we were unable to recover it.
00:36:46.487 [2024-12-16 02:58:16.884852] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:46.487 [2024-12-16 02:58:16.884912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:46.488 [2024-12-16 02:58:16.884926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:46.488 [2024-12-16 02:58:16.884934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:46.488 [2024-12-16 02:58:16.884940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:46.488 [2024-12-16 02:58:16.884955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:46.488 qpair failed and we were unable to recover it.
00:36:46.488 [2024-12-16 02:58:16.894985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:46.488 [2024-12-16 02:58:16.895038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:46.488 [2024-12-16 02:58:16.895052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:46.488 [2024-12-16 02:58:16.895059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:46.488 [2024-12-16 02:58:16.895065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:46.488 [2024-12-16 02:58:16.895080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:46.488 qpair failed and we were unable to recover it.
00:36:46.488 [2024-12-16 02:58:16.904970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:46.488 [2024-12-16 02:58:16.905026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:46.488 [2024-12-16 02:58:16.905039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:46.488 [2024-12-16 02:58:16.905046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:46.488 [2024-12-16 02:58:16.905052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:46.488 [2024-12-16 02:58:16.905066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:46.488 qpair failed and we were unable to recover it.
00:36:46.488 [2024-12-16 02:58:16.915050] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.488 [2024-12-16 02:58:16.915103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.488 [2024-12-16 02:58:16.915117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.488 [2024-12-16 02:58:16.915123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.488 [2024-12-16 02:58:16.915130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:46.488 [2024-12-16 02:58:16.915144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:46.488 qpair failed and we were unable to recover it. 
00:36:46.488 [2024-12-16 02:58:16.925042] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.488 [2024-12-16 02:58:16.925098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.488 [2024-12-16 02:58:16.925113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.488 [2024-12-16 02:58:16.925120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.488 [2024-12-16 02:58:16.925126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:46.488 [2024-12-16 02:58:16.925141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:46.488 qpair failed and we were unable to recover it. 
00:36:46.488 [2024-12-16 02:58:16.935116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.488 [2024-12-16 02:58:16.935176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.488 [2024-12-16 02:58:16.935189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.488 [2024-12-16 02:58:16.935197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.488 [2024-12-16 02:58:16.935203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:46.488 [2024-12-16 02:58:16.935218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:46.488 qpair failed and we were unable to recover it. 
00:36:46.488 [2024-12-16 02:58:16.945003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.488 [2024-12-16 02:58:16.945089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.488 [2024-12-16 02:58:16.945102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.488 [2024-12-16 02:58:16.945109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.488 [2024-12-16 02:58:16.945116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:46.488 [2024-12-16 02:58:16.945131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:46.488 qpair failed and we were unable to recover it. 
00:36:46.488 [2024-12-16 02:58:16.955119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.488 [2024-12-16 02:58:16.955196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.488 [2024-12-16 02:58:16.955210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.488 [2024-12-16 02:58:16.955217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.488 [2024-12-16 02:58:16.955223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:46.488 [2024-12-16 02:58:16.955238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:46.488 qpair failed and we were unable to recover it. 
00:36:46.488 [2024-12-16 02:58:16.965163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.488 [2024-12-16 02:58:16.965228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.488 [2024-12-16 02:58:16.965245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.488 [2024-12-16 02:58:16.965253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.488 [2024-12-16 02:58:16.965258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:46.488 [2024-12-16 02:58:16.965273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:46.488 qpair failed and we were unable to recover it. 
00:36:46.488 [2024-12-16 02:58:16.975178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.488 [2024-12-16 02:58:16.975252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.488 [2024-12-16 02:58:16.975265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.488 [2024-12-16 02:58:16.975272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.488 [2024-12-16 02:58:16.975278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:46.488 [2024-12-16 02:58:16.975292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:46.488 qpair failed and we were unable to recover it. 
00:36:46.488 [2024-12-16 02:58:16.985205] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.488 [2024-12-16 02:58:16.985276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.488 [2024-12-16 02:58:16.985290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.488 [2024-12-16 02:58:16.985297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.488 [2024-12-16 02:58:16.985303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:46.488 [2024-12-16 02:58:16.985318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:46.488 qpair failed and we were unable to recover it. 
00:36:46.488 [2024-12-16 02:58:16.995219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.488 [2024-12-16 02:58:16.995276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.488 [2024-12-16 02:58:16.995289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.488 [2024-12-16 02:58:16.995297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.488 [2024-12-16 02:58:16.995303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:46.488 [2024-12-16 02:58:16.995317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:46.488 qpair failed and we were unable to recover it. 
00:36:46.488 [2024-12-16 02:58:17.005265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.488 [2024-12-16 02:58:17.005330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.488 [2024-12-16 02:58:17.005343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.488 [2024-12-16 02:58:17.005350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.488 [2024-12-16 02:58:17.005359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:46.488 [2024-12-16 02:58:17.005373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:46.488 qpair failed and we were unable to recover it. 
00:36:46.488 [2024-12-16 02:58:17.015306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.488 [2024-12-16 02:58:17.015367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.488 [2024-12-16 02:58:17.015380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.488 [2024-12-16 02:58:17.015387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.488 [2024-12-16 02:58:17.015393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:46.489 [2024-12-16 02:58:17.015407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:46.489 qpair failed and we were unable to recover it. 
00:36:46.489 [2024-12-16 02:58:17.025316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.489 [2024-12-16 02:58:17.025397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.489 [2024-12-16 02:58:17.025411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.489 [2024-12-16 02:58:17.025418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.489 [2024-12-16 02:58:17.025424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:46.489 [2024-12-16 02:58:17.025439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:46.489 qpair failed and we were unable to recover it. 
00:36:46.489 [2024-12-16 02:58:17.035334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.489 [2024-12-16 02:58:17.035392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.489 [2024-12-16 02:58:17.035406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.489 [2024-12-16 02:58:17.035414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.489 [2024-12-16 02:58:17.035420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:46.489 [2024-12-16 02:58:17.035434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:46.489 qpair failed and we were unable to recover it. 
00:36:46.489 [2024-12-16 02:58:17.045368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.489 [2024-12-16 02:58:17.045421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.489 [2024-12-16 02:58:17.045435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.489 [2024-12-16 02:58:17.045442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.489 [2024-12-16 02:58:17.045449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:46.489 [2024-12-16 02:58:17.045463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:46.489 qpair failed and we were unable to recover it. 
00:36:46.489 [2024-12-16 02:58:17.055405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.489 [2024-12-16 02:58:17.055458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.489 [2024-12-16 02:58:17.055473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.489 [2024-12-16 02:58:17.055479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.489 [2024-12-16 02:58:17.055485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:46.489 [2024-12-16 02:58:17.055500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:46.489 qpair failed and we were unable to recover it. 
00:36:46.489 [2024-12-16 02:58:17.065422] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.489 [2024-12-16 02:58:17.065472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.489 [2024-12-16 02:58:17.065486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.489 [2024-12-16 02:58:17.065493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.489 [2024-12-16 02:58:17.065499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:46.489 [2024-12-16 02:58:17.065513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:46.489 qpair failed and we were unable to recover it. 
00:36:46.489 [2024-12-16 02:58:17.075452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.489 [2024-12-16 02:58:17.075507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.489 [2024-12-16 02:58:17.075520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.489 [2024-12-16 02:58:17.075527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.489 [2024-12-16 02:58:17.075533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:46.489 [2024-12-16 02:58:17.075547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:46.489 qpair failed and we were unable to recover it. 
00:36:46.489 [2024-12-16 02:58:17.085480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.489 [2024-12-16 02:58:17.085580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.489 [2024-12-16 02:58:17.085594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.489 [2024-12-16 02:58:17.085601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.489 [2024-12-16 02:58:17.085606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:46.489 [2024-12-16 02:58:17.085620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:46.489 qpair failed and we were unable to recover it. 
00:36:46.489 [2024-12-16 02:58:17.095506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.489 [2024-12-16 02:58:17.095582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.489 [2024-12-16 02:58:17.095599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.489 [2024-12-16 02:58:17.095607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.489 [2024-12-16 02:58:17.095613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:46.489 [2024-12-16 02:58:17.095628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:46.489 qpair failed and we were unable to recover it. 
00:36:46.489 [2024-12-16 02:58:17.105547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.489 [2024-12-16 02:58:17.105600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.489 [2024-12-16 02:58:17.105614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.489 [2024-12-16 02:58:17.105621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.489 [2024-12-16 02:58:17.105627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:46.489 [2024-12-16 02:58:17.105641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:46.489 qpair failed and we were unable to recover it. 
00:36:46.489 [2024-12-16 02:58:17.115576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.489 [2024-12-16 02:58:17.115628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.489 [2024-12-16 02:58:17.115642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.489 [2024-12-16 02:58:17.115649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.489 [2024-12-16 02:58:17.115655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:46.489 [2024-12-16 02:58:17.115669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:46.489 qpair failed and we were unable to recover it. 
00:36:46.489 [2024-12-16 02:58:17.125609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.489 [2024-12-16 02:58:17.125667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.489 [2024-12-16 02:58:17.125681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.489 [2024-12-16 02:58:17.125689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.489 [2024-12-16 02:58:17.125695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:46.489 [2024-12-16 02:58:17.125709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:46.489 qpair failed and we were unable to recover it. 
00:36:46.489 [2024-12-16 02:58:17.135635] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.489 [2024-12-16 02:58:17.135697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.489 [2024-12-16 02:58:17.135711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.489 [2024-12-16 02:58:17.135719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.489 [2024-12-16 02:58:17.135728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:46.489 [2024-12-16 02:58:17.135743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:46.489 qpair failed and we were unable to recover it. 
00:36:46.489 [2024-12-16 02:58:17.145598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.489 [2024-12-16 02:58:17.145652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.489 [2024-12-16 02:58:17.145666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.750 [2024-12-16 02:58:17.145674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.750 [2024-12-16 02:58:17.145681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:46.750 [2024-12-16 02:58:17.145696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:46.750 qpair failed and we were unable to recover it. 
00:36:46.750 [2024-12-16 02:58:17.155675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.750 [2024-12-16 02:58:17.155730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.750 [2024-12-16 02:58:17.155744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.750 [2024-12-16 02:58:17.155750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.750 [2024-12-16 02:58:17.155757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:46.750 [2024-12-16 02:58:17.155770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:46.750 qpair failed and we were unable to recover it. 
00:36:46.750 [2024-12-16 02:58:17.165741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.750 [2024-12-16 02:58:17.165798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.750 [2024-12-16 02:58:17.165811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.750 [2024-12-16 02:58:17.165819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.750 [2024-12-16 02:58:17.165825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:46.750 [2024-12-16 02:58:17.165839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:46.750 qpair failed and we were unable to recover it. 
00:36:46.750 [2024-12-16 02:58:17.175755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.750 [2024-12-16 02:58:17.175858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.750 [2024-12-16 02:58:17.175872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.750 [2024-12-16 02:58:17.175879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.750 [2024-12-16 02:58:17.175885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:46.750 [2024-12-16 02:58:17.175899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:46.750 qpair failed and we were unable to recover it. 
00:36:46.750 [2024-12-16 02:58:17.185780] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:46.750 [2024-12-16 02:58:17.185839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:46.750 [2024-12-16 02:58:17.185857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:46.750 [2024-12-16 02:58:17.185864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:46.750 [2024-12-16 02:58:17.185870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:46.750 [2024-12-16 02:58:17.185885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:46.750 qpair failed and we were unable to recover it. 
00:36:46.750 [2024-12-16 02:58:17.195794 - 02:58:17.526829] (the same seven-record CONNECT failure sequence - Unknown controller ID 0x1; Connect command failed, rc -5; sct 1, sc 130; CQ transport error -6 on qpair id 3 - repeated 34 more times at ~10 ms intervals, each ending "qpair failed and we were unable to recover it.")
00:36:47.013 [2024-12-16 02:58:17.536781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.013 [2024-12-16 02:58:17.536885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.013 [2024-12-16 02:58:17.536900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.013 [2024-12-16 02:58:17.536906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.013 [2024-12-16 02:58:17.536913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.013 [2024-12-16 02:58:17.536927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.013 qpair failed and we were unable to recover it. 
00:36:47.013 [2024-12-16 02:58:17.546798] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.013 [2024-12-16 02:58:17.546853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.013 [2024-12-16 02:58:17.546867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.013 [2024-12-16 02:58:17.546874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.013 [2024-12-16 02:58:17.546880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.013 [2024-12-16 02:58:17.546895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.013 qpair failed and we were unable to recover it. 
00:36:47.013 [2024-12-16 02:58:17.556771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.013 [2024-12-16 02:58:17.556823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.013 [2024-12-16 02:58:17.556837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.013 [2024-12-16 02:58:17.556844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.013 [2024-12-16 02:58:17.556854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.013 [2024-12-16 02:58:17.556869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.013 qpair failed and we were unable to recover it. 
00:36:47.013 [2024-12-16 02:58:17.566845] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.013 [2024-12-16 02:58:17.566905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.013 [2024-12-16 02:58:17.566919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.013 [2024-12-16 02:58:17.566926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.013 [2024-12-16 02:58:17.566932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.013 [2024-12-16 02:58:17.566947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.013 qpair failed and we were unable to recover it. 
00:36:47.013 [2024-12-16 02:58:17.576899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.013 [2024-12-16 02:58:17.576961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.013 [2024-12-16 02:58:17.576976] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.013 [2024-12-16 02:58:17.576983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.013 [2024-12-16 02:58:17.576989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.013 [2024-12-16 02:58:17.577003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.013 qpair failed and we were unable to recover it. 
00:36:47.013 [2024-12-16 02:58:17.586912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.013 [2024-12-16 02:58:17.586965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.013 [2024-12-16 02:58:17.586980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.013 [2024-12-16 02:58:17.586987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.013 [2024-12-16 02:58:17.586994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.013 [2024-12-16 02:58:17.587009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.013 qpair failed and we were unable to recover it. 
00:36:47.013 [2024-12-16 02:58:17.596948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.013 [2024-12-16 02:58:17.597011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.013 [2024-12-16 02:58:17.597025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.013 [2024-12-16 02:58:17.597032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.013 [2024-12-16 02:58:17.597038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.013 [2024-12-16 02:58:17.597052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.013 qpair failed and we were unable to recover it. 
00:36:47.013 [2024-12-16 02:58:17.606983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.013 [2024-12-16 02:58:17.607061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.013 [2024-12-16 02:58:17.607075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.013 [2024-12-16 02:58:17.607082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.013 [2024-12-16 02:58:17.607088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.013 [2024-12-16 02:58:17.607102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.013 qpair failed and we were unable to recover it. 
00:36:47.013 [2024-12-16 02:58:17.617029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.013 [2024-12-16 02:58:17.617093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.013 [2024-12-16 02:58:17.617109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.013 [2024-12-16 02:58:17.617117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.013 [2024-12-16 02:58:17.617123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.013 [2024-12-16 02:58:17.617138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.013 qpair failed and we were unable to recover it. 
00:36:47.013 [2024-12-16 02:58:17.627031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.013 [2024-12-16 02:58:17.627090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.013 [2024-12-16 02:58:17.627104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.013 [2024-12-16 02:58:17.627112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.013 [2024-12-16 02:58:17.627118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.013 [2024-12-16 02:58:17.627133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.013 qpair failed and we were unable to recover it. 
00:36:47.013 [2024-12-16 02:58:17.637093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.013 [2024-12-16 02:58:17.637151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.013 [2024-12-16 02:58:17.637165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.013 [2024-12-16 02:58:17.637172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.013 [2024-12-16 02:58:17.637178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.013 [2024-12-16 02:58:17.637193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.013 qpair failed and we were unable to recover it. 
00:36:47.013 [2024-12-16 02:58:17.647131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.013 [2024-12-16 02:58:17.647230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.013 [2024-12-16 02:58:17.647245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.013 [2024-12-16 02:58:17.647251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.014 [2024-12-16 02:58:17.647257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.014 [2024-12-16 02:58:17.647272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.014 qpair failed and we were unable to recover it. 
00:36:47.014 [2024-12-16 02:58:17.657172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.014 [2024-12-16 02:58:17.657254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.014 [2024-12-16 02:58:17.657268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.014 [2024-12-16 02:58:17.657275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.014 [2024-12-16 02:58:17.657285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.014 [2024-12-16 02:58:17.657299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.014 qpair failed and we were unable to recover it. 
00:36:47.014 [2024-12-16 02:58:17.667088] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.014 [2024-12-16 02:58:17.667144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.014 [2024-12-16 02:58:17.667157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.014 [2024-12-16 02:58:17.667164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.014 [2024-12-16 02:58:17.667170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.014 [2024-12-16 02:58:17.667185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.014 qpair failed and we were unable to recover it. 
00:36:47.274 [2024-12-16 02:58:17.677169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.274 [2024-12-16 02:58:17.677222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.274 [2024-12-16 02:58:17.677235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.274 [2024-12-16 02:58:17.677242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.274 [2024-12-16 02:58:17.677249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.274 [2024-12-16 02:58:17.677262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.274 qpair failed and we were unable to recover it. 
00:36:47.274 [2024-12-16 02:58:17.687225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.274 [2024-12-16 02:58:17.687296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.274 [2024-12-16 02:58:17.687310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.274 [2024-12-16 02:58:17.687316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.274 [2024-12-16 02:58:17.687323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.274 [2024-12-16 02:58:17.687337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.274 qpair failed and we were unable to recover it. 
00:36:47.274 [2024-12-16 02:58:17.697201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.274 [2024-12-16 02:58:17.697279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.274 [2024-12-16 02:58:17.697293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.274 [2024-12-16 02:58:17.697299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.274 [2024-12-16 02:58:17.697305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.274 [2024-12-16 02:58:17.697319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.274 qpair failed and we were unable to recover it. 
00:36:47.274 [2024-12-16 02:58:17.707245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.274 [2024-12-16 02:58:17.707307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.274 [2024-12-16 02:58:17.707322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.274 [2024-12-16 02:58:17.707329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.274 [2024-12-16 02:58:17.707335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.274 [2024-12-16 02:58:17.707349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.274 qpair failed and we were unable to recover it. 
00:36:47.274 [2024-12-16 02:58:17.717294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.274 [2024-12-16 02:58:17.717345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.274 [2024-12-16 02:58:17.717359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.274 [2024-12-16 02:58:17.717366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.274 [2024-12-16 02:58:17.717372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.274 [2024-12-16 02:58:17.717386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.274 qpair failed and we were unable to recover it. 
00:36:47.274 [2024-12-16 02:58:17.727379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.274 [2024-12-16 02:58:17.727432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.274 [2024-12-16 02:58:17.727445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.274 [2024-12-16 02:58:17.727452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.274 [2024-12-16 02:58:17.727458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.274 [2024-12-16 02:58:17.727474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.274 qpair failed and we were unable to recover it. 
00:36:47.274 [2024-12-16 02:58:17.737354] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.274 [2024-12-16 02:58:17.737409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.274 [2024-12-16 02:58:17.737422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.274 [2024-12-16 02:58:17.737429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.274 [2024-12-16 02:58:17.737436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.274 [2024-12-16 02:58:17.737451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.274 qpair failed and we were unable to recover it. 
00:36:47.274 [2024-12-16 02:58:17.747303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.274 [2024-12-16 02:58:17.747364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.274 [2024-12-16 02:58:17.747381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.274 [2024-12-16 02:58:17.747388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.275 [2024-12-16 02:58:17.747393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.275 [2024-12-16 02:58:17.747408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.275 qpair failed and we were unable to recover it. 
00:36:47.275 [2024-12-16 02:58:17.757399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.275 [2024-12-16 02:58:17.757451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.275 [2024-12-16 02:58:17.757464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.275 [2024-12-16 02:58:17.757471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.275 [2024-12-16 02:58:17.757477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.275 [2024-12-16 02:58:17.757490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.275 qpair failed and we were unable to recover it. 
00:36:47.275 [2024-12-16 02:58:17.767367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.275 [2024-12-16 02:58:17.767417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.275 [2024-12-16 02:58:17.767431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.275 [2024-12-16 02:58:17.767437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.275 [2024-12-16 02:58:17.767443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.275 [2024-12-16 02:58:17.767458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.275 qpair failed and we were unable to recover it. 
00:36:47.275 [2024-12-16 02:58:17.777484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.275 [2024-12-16 02:58:17.777552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.275 [2024-12-16 02:58:17.777566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.275 [2024-12-16 02:58:17.777573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.275 [2024-12-16 02:58:17.777579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.275 [2024-12-16 02:58:17.777593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.275 qpair failed and we were unable to recover it. 
00:36:47.275 [2024-12-16 02:58:17.787561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.275 [2024-12-16 02:58:17.787663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.275 [2024-12-16 02:58:17.787677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.275 [2024-12-16 02:58:17.787684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.275 [2024-12-16 02:58:17.787693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.275 [2024-12-16 02:58:17.787707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.275 qpair failed and we were unable to recover it. 
00:36:47.275 [2024-12-16 02:58:17.797454] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.275 [2024-12-16 02:58:17.797506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.275 [2024-12-16 02:58:17.797520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.275 [2024-12-16 02:58:17.797526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.275 [2024-12-16 02:58:17.797533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.275 [2024-12-16 02:58:17.797547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.275 qpair failed and we were unable to recover it. 
00:36:47.275 [2024-12-16 02:58:17.807539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.275 [2024-12-16 02:58:17.807593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.275 [2024-12-16 02:58:17.807607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.275 [2024-12-16 02:58:17.807614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.275 [2024-12-16 02:58:17.807620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.275 [2024-12-16 02:58:17.807634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.275 qpair failed and we were unable to recover it. 
00:36:47.275 [2024-12-16 02:58:17.817584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.275 [2024-12-16 02:58:17.817641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.275 [2024-12-16 02:58:17.817655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.275 [2024-12-16 02:58:17.817661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.275 [2024-12-16 02:58:17.817668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.275 [2024-12-16 02:58:17.817682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.275 qpair failed and we were unable to recover it. 
00:36:47.275 [2024-12-16 02:58:17.827598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.275 [2024-12-16 02:58:17.827652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.275 [2024-12-16 02:58:17.827666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.275 [2024-12-16 02:58:17.827673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.275 [2024-12-16 02:58:17.827679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.275 [2024-12-16 02:58:17.827694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.275 qpair failed and we were unable to recover it. 
00:36:47.275 [2024-12-16 02:58:17.837643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.275 [2024-12-16 02:58:17.837700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.275 [2024-12-16 02:58:17.837714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.275 [2024-12-16 02:58:17.837721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.275 [2024-12-16 02:58:17.837727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.275 [2024-12-16 02:58:17.837742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.275 qpair failed and we were unable to recover it. 
00:36:47.275 [2024-12-16 02:58:17.847672] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.275 [2024-12-16 02:58:17.847725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.275 [2024-12-16 02:58:17.847739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.275 [2024-12-16 02:58:17.847746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.275 [2024-12-16 02:58:17.847752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.275 [2024-12-16 02:58:17.847767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.275 qpair failed and we were unable to recover it. 
00:36:47.275 [2024-12-16 02:58:17.857684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.275 [2024-12-16 02:58:17.857765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.275 [2024-12-16 02:58:17.857779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.275 [2024-12-16 02:58:17.857785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.275 [2024-12-16 02:58:17.857792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.275 [2024-12-16 02:58:17.857806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.275 qpair failed and we were unable to recover it. 
00:36:47.275 [2024-12-16 02:58:17.867727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.275 [2024-12-16 02:58:17.867814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.275 [2024-12-16 02:58:17.867827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.275 [2024-12-16 02:58:17.867834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.275 [2024-12-16 02:58:17.867840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.275 [2024-12-16 02:58:17.867858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.275 qpair failed and we were unable to recover it. 
00:36:47.275 [2024-12-16 02:58:17.877763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.275 [2024-12-16 02:58:17.877814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.275 [2024-12-16 02:58:17.877831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.275 [2024-12-16 02:58:17.877838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.275 [2024-12-16 02:58:17.877844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.275 [2024-12-16 02:58:17.877873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.276 qpair failed and we were unable to recover it. 
00:36:47.276 [2024-12-16 02:58:17.887796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.276 [2024-12-16 02:58:17.887857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.276 [2024-12-16 02:58:17.887872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.276 [2024-12-16 02:58:17.887879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.276 [2024-12-16 02:58:17.887886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.276 [2024-12-16 02:58:17.887900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.276 qpair failed and we were unable to recover it. 
00:36:47.276 [2024-12-16 02:58:17.897807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.276 [2024-12-16 02:58:17.897864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.276 [2024-12-16 02:58:17.897878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.276 [2024-12-16 02:58:17.897885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.276 [2024-12-16 02:58:17.897892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.276 [2024-12-16 02:58:17.897906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.276 qpair failed and we were unable to recover it. 
00:36:47.276 [2024-12-16 02:58:17.907831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.276 [2024-12-16 02:58:17.907941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.276 [2024-12-16 02:58:17.907955] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.276 [2024-12-16 02:58:17.907962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.276 [2024-12-16 02:58:17.907968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.276 [2024-12-16 02:58:17.907982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.276 qpair failed and we were unable to recover it. 
00:36:47.276 [2024-12-16 02:58:17.917806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.276 [2024-12-16 02:58:17.917864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.276 [2024-12-16 02:58:17.917877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.276 [2024-12-16 02:58:17.917884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.276 [2024-12-16 02:58:17.917893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.276 [2024-12-16 02:58:17.917908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.276 qpair failed and we were unable to recover it. 
00:36:47.276 [2024-12-16 02:58:17.927813] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.276 [2024-12-16 02:58:17.927867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.276 [2024-12-16 02:58:17.927881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.276 [2024-12-16 02:58:17.927888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.276 [2024-12-16 02:58:17.927894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.276 [2024-12-16 02:58:17.927909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.276 qpair failed and we were unable to recover it. 
00:36:47.535 [2024-12-16 02:58:17.937920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.535 [2024-12-16 02:58:17.938012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.535 [2024-12-16 02:58:17.938026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.535 [2024-12-16 02:58:17.938033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.536 [2024-12-16 02:58:17.938040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.536 [2024-12-16 02:58:17.938054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.536 qpair failed and we were unable to recover it. 
00:36:47.536 [2024-12-16 02:58:17.947943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.536 [2024-12-16 02:58:17.948032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.536 [2024-12-16 02:58:17.948046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.536 [2024-12-16 02:58:17.948052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.536 [2024-12-16 02:58:17.948058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.536 [2024-12-16 02:58:17.948072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.536 qpair failed and we were unable to recover it. 
00:36:47.536 [2024-12-16 02:58:17.958014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.536 [2024-12-16 02:58:17.958082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.536 [2024-12-16 02:58:17.958095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.536 [2024-12-16 02:58:17.958102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.536 [2024-12-16 02:58:17.958108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.536 [2024-12-16 02:58:17.958121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.536 qpair failed and we were unable to recover it. 
00:36:47.536 [2024-12-16 02:58:17.967994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.536 [2024-12-16 02:58:17.968085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.536 [2024-12-16 02:58:17.968098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.536 [2024-12-16 02:58:17.968104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.536 [2024-12-16 02:58:17.968111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.536 [2024-12-16 02:58:17.968125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.536 qpair failed and we were unable to recover it. 
00:36:47.536 [2024-12-16 02:58:17.978058] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.536 [2024-12-16 02:58:17.978114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.536 [2024-12-16 02:58:17.978128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.536 [2024-12-16 02:58:17.978135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.536 [2024-12-16 02:58:17.978141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.536 [2024-12-16 02:58:17.978155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.536 qpair failed and we were unable to recover it. 
00:36:47.536 [2024-12-16 02:58:17.988010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.536 [2024-12-16 02:58:17.988065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.536 [2024-12-16 02:58:17.988079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.536 [2024-12-16 02:58:17.988086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.536 [2024-12-16 02:58:17.988092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.536 [2024-12-16 02:58:17.988107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.536 qpair failed and we were unable to recover it. 
00:36:47.536 [2024-12-16 02:58:17.998024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.536 [2024-12-16 02:58:17.998082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.536 [2024-12-16 02:58:17.998096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.536 [2024-12-16 02:58:17.998102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.536 [2024-12-16 02:58:17.998109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.536 [2024-12-16 02:58:17.998124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.536 qpair failed and we were unable to recover it. 
00:36:47.536 [2024-12-16 02:58:18.008047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.536 [2024-12-16 02:58:18.008099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.536 [2024-12-16 02:58:18.008116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.536 [2024-12-16 02:58:18.008122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.536 [2024-12-16 02:58:18.008129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.536 [2024-12-16 02:58:18.008143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.536 qpair failed and we were unable to recover it. 
00:36:47.536 [2024-12-16 02:58:18.018095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.536 [2024-12-16 02:58:18.018150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.536 [2024-12-16 02:58:18.018163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.536 [2024-12-16 02:58:18.018170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.536 [2024-12-16 02:58:18.018176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.536 [2024-12-16 02:58:18.018190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.536 qpair failed and we were unable to recover it. 
00:36:47.536 [2024-12-16 02:58:18.028109] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.536 [2024-12-16 02:58:18.028177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.536 [2024-12-16 02:58:18.028191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.536 [2024-12-16 02:58:18.028199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.536 [2024-12-16 02:58:18.028204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.536 [2024-12-16 02:58:18.028220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.536 qpair failed and we were unable to recover it. 
00:36:47.536 [2024-12-16 02:58:18.038206] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.536 [2024-12-16 02:58:18.038263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.536 [2024-12-16 02:58:18.038277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.536 [2024-12-16 02:58:18.038284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.536 [2024-12-16 02:58:18.038290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.536 [2024-12-16 02:58:18.038305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.536 qpair failed and we were unable to recover it. 
00:36:47.536 [2024-12-16 02:58:18.048278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.536 [2024-12-16 02:58:18.048335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.536 [2024-12-16 02:58:18.048349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.536 [2024-12-16 02:58:18.048357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.536 [2024-12-16 02:58:18.048366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.536 [2024-12-16 02:58:18.048381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.536 qpair failed and we were unable to recover it. 
00:36:47.536 [2024-12-16 02:58:18.058273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.536 [2024-12-16 02:58:18.058341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.536 [2024-12-16 02:58:18.058354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.536 [2024-12-16 02:58:18.058361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.536 [2024-12-16 02:58:18.058367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.536 [2024-12-16 02:58:18.058382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.536 qpair failed and we were unable to recover it. 
00:36:47.536 [2024-12-16 02:58:18.068289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.536 [2024-12-16 02:58:18.068345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.536 [2024-12-16 02:58:18.068359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.537 [2024-12-16 02:58:18.068365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.537 [2024-12-16 02:58:18.068372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.537 [2024-12-16 02:58:18.068386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.537 qpair failed and we were unable to recover it. 
00:36:47.537 [2024-12-16 02:58:18.078361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.537 [2024-12-16 02:58:18.078451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.537 [2024-12-16 02:58:18.078465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.537 [2024-12-16 02:58:18.078471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.537 [2024-12-16 02:58:18.078477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.537 [2024-12-16 02:58:18.078492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.537 qpair failed and we were unable to recover it. 
00:36:47.537 [2024-12-16 02:58:18.088340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.537 [2024-12-16 02:58:18.088422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.537 [2024-12-16 02:58:18.088436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.537 [2024-12-16 02:58:18.088443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.537 [2024-12-16 02:58:18.088449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.537 [2024-12-16 02:58:18.088464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.537 qpair failed and we were unable to recover it. 
00:36:47.537 [2024-12-16 02:58:18.098358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.537 [2024-12-16 02:58:18.098419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.537 [2024-12-16 02:58:18.098433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.537 [2024-12-16 02:58:18.098439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.537 [2024-12-16 02:58:18.098446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.537 [2024-12-16 02:58:18.098460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.537 qpair failed and we were unable to recover it. 
00:36:47.537 [2024-12-16 02:58:18.108431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:47.537 [2024-12-16 02:58:18.108500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:47.537 [2024-12-16 02:58:18.108514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:47.537 [2024-12-16 02:58:18.108520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:47.537 [2024-12-16 02:58:18.108526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:47.537 [2024-12-16 02:58:18.108541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:47.537 qpair failed and we were unable to recover it. 
00:36:47.537 [2024-12-16 02:58:18.118488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.537 [2024-12-16 02:58:18.118544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.537 [2024-12-16 02:58:18.118559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.537 [2024-12-16 02:58:18.118566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.537 [2024-12-16 02:58:18.118573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.537 [2024-12-16 02:58:18.118588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.537 qpair failed and we were unable to recover it.
00:36:47.537 [2024-12-16 02:58:18.128461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.537 [2024-12-16 02:58:18.128529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.537 [2024-12-16 02:58:18.128543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.537 [2024-12-16 02:58:18.128550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.537 [2024-12-16 02:58:18.128557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.537 [2024-12-16 02:58:18.128572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.537 qpair failed and we were unable to recover it.
00:36:47.537 [2024-12-16 02:58:18.138490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.537 [2024-12-16 02:58:18.138549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.537 [2024-12-16 02:58:18.138566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.537 [2024-12-16 02:58:18.138573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.537 [2024-12-16 02:58:18.138579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.537 [2024-12-16 02:58:18.138594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.537 qpair failed and we were unable to recover it.
00:36:47.537 [2024-12-16 02:58:18.148511] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.537 [2024-12-16 02:58:18.148569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.537 [2024-12-16 02:58:18.148583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.537 [2024-12-16 02:58:18.148590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.537 [2024-12-16 02:58:18.148596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.537 [2024-12-16 02:58:18.148611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.537 qpair failed and we were unable to recover it.
00:36:47.537 [2024-12-16 02:58:18.158458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.537 [2024-12-16 02:58:18.158513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.537 [2024-12-16 02:58:18.158526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.537 [2024-12-16 02:58:18.158535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.537 [2024-12-16 02:58:18.158541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.537 [2024-12-16 02:58:18.158555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.537 qpair failed and we were unable to recover it.
00:36:47.537 [2024-12-16 02:58:18.168494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.537 [2024-12-16 02:58:18.168543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.537 [2024-12-16 02:58:18.168556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.537 [2024-12-16 02:58:18.168563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.537 [2024-12-16 02:58:18.168569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.537 [2024-12-16 02:58:18.168584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.537 qpair failed and we were unable to recover it.
00:36:47.537 [2024-12-16 02:58:18.178602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.537 [2024-12-16 02:58:18.178658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.537 [2024-12-16 02:58:18.178671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.537 [2024-12-16 02:58:18.178678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.537 [2024-12-16 02:58:18.178688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.537 [2024-12-16 02:58:18.178702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.537 qpair failed and we were unable to recover it.
00:36:47.537 [2024-12-16 02:58:18.188615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.537 [2024-12-16 02:58:18.188665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.537 [2024-12-16 02:58:18.188678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.537 [2024-12-16 02:58:18.188685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.537 [2024-12-16 02:58:18.188691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.537 [2024-12-16 02:58:18.188705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.537 qpair failed and we were unable to recover it.
00:36:47.798 [2024-12-16 02:58:18.198648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.798 [2024-12-16 02:58:18.198701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.798 [2024-12-16 02:58:18.198715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.798 [2024-12-16 02:58:18.198722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.798 [2024-12-16 02:58:18.198729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.798 [2024-12-16 02:58:18.198743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.798 qpair failed and we were unable to recover it.
00:36:47.798 [2024-12-16 02:58:18.208668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.798 [2024-12-16 02:58:18.208722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.798 [2024-12-16 02:58:18.208736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.798 [2024-12-16 02:58:18.208742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.798 [2024-12-16 02:58:18.208748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.798 [2024-12-16 02:58:18.208763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.798 qpair failed and we were unable to recover it.
00:36:47.798 [2024-12-16 02:58:18.218729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.798 [2024-12-16 02:58:18.218795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.798 [2024-12-16 02:58:18.218808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.798 [2024-12-16 02:58:18.218815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.798 [2024-12-16 02:58:18.218821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.798 [2024-12-16 02:58:18.218835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.798 qpair failed and we were unable to recover it.
00:36:47.798 [2024-12-16 02:58:18.228770] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.798 [2024-12-16 02:58:18.228824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.798 [2024-12-16 02:58:18.228839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.798 [2024-12-16 02:58:18.228845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.798 [2024-12-16 02:58:18.228856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.798 [2024-12-16 02:58:18.228871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.798 qpair failed and we were unable to recover it.
00:36:47.798 [2024-12-16 02:58:18.238757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.798 [2024-12-16 02:58:18.238809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.798 [2024-12-16 02:58:18.238823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.798 [2024-12-16 02:58:18.238830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.798 [2024-12-16 02:58:18.238836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.798 [2024-12-16 02:58:18.238854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.798 qpair failed and we were unable to recover it.
00:36:47.798 [2024-12-16 02:58:18.248794] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.798 [2024-12-16 02:58:18.248851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.798 [2024-12-16 02:58:18.248865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.798 [2024-12-16 02:58:18.248872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.798 [2024-12-16 02:58:18.248878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.798 [2024-12-16 02:58:18.248893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.798 qpair failed and we were unable to recover it.
00:36:47.798 [2024-12-16 02:58:18.258875] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.798 [2024-12-16 02:58:18.258938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.798 [2024-12-16 02:58:18.258951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.798 [2024-12-16 02:58:18.258958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.798 [2024-12-16 02:58:18.258965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.798 [2024-12-16 02:58:18.258979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.798 qpair failed and we were unable to recover it.
00:36:47.798 [2024-12-16 02:58:18.268844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.798 [2024-12-16 02:58:18.268909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.798 [2024-12-16 02:58:18.268926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.798 [2024-12-16 02:58:18.268933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.798 [2024-12-16 02:58:18.268939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.798 [2024-12-16 02:58:18.268954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.798 qpair failed and we were unable to recover it.
00:36:47.798 [2024-12-16 02:58:18.278927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.798 [2024-12-16 02:58:18.279024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.798 [2024-12-16 02:58:18.279039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.798 [2024-12-16 02:58:18.279045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.798 [2024-12-16 02:58:18.279052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.798 [2024-12-16 02:58:18.279066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.798 qpair failed and we were unable to recover it.
00:36:47.798 [2024-12-16 02:58:18.288925] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.798 [2024-12-16 02:58:18.288980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.798 [2024-12-16 02:58:18.288994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.798 [2024-12-16 02:58:18.289002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.798 [2024-12-16 02:58:18.289009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.798 [2024-12-16 02:58:18.289023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.798 qpair failed and we were unable to recover it.
00:36:47.798 [2024-12-16 02:58:18.298928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.798 [2024-12-16 02:58:18.298982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.798 [2024-12-16 02:58:18.298995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.798 [2024-12-16 02:58:18.299001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.798 [2024-12-16 02:58:18.299007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.798 [2024-12-16 02:58:18.299022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.798 qpair failed and we were unable to recover it.
00:36:47.798 [2024-12-16 02:58:18.308943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.798 [2024-12-16 02:58:18.308999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.798 [2024-12-16 02:58:18.309012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.798 [2024-12-16 02:58:18.309019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.798 [2024-12-16 02:58:18.309028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.798 [2024-12-16 02:58:18.309043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.798 qpair failed and we were unable to recover it.
00:36:47.798 [2024-12-16 02:58:18.318970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.798 [2024-12-16 02:58:18.319024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.798 [2024-12-16 02:58:18.319037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.798 [2024-12-16 02:58:18.319043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.798 [2024-12-16 02:58:18.319050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.799 [2024-12-16 02:58:18.319064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.799 qpair failed and we were unable to recover it.
00:36:47.799 [2024-12-16 02:58:18.329049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.799 [2024-12-16 02:58:18.329100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.799 [2024-12-16 02:58:18.329114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.799 [2024-12-16 02:58:18.329121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.799 [2024-12-16 02:58:18.329128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.799 [2024-12-16 02:58:18.329143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.799 qpair failed and we were unable to recover it.
00:36:47.799 [2024-12-16 02:58:18.339081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.799 [2024-12-16 02:58:18.339145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.799 [2024-12-16 02:58:18.339158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.799 [2024-12-16 02:58:18.339166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.799 [2024-12-16 02:58:18.339172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.799 [2024-12-16 02:58:18.339186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.799 qpair failed and we were unable to recover it.
00:36:47.799 [2024-12-16 02:58:18.349068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.799 [2024-12-16 02:58:18.349121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.799 [2024-12-16 02:58:18.349134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.799 [2024-12-16 02:58:18.349141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.799 [2024-12-16 02:58:18.349148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.799 [2024-12-16 02:58:18.349162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.799 qpair failed and we were unable to recover it.
00:36:47.799 [2024-12-16 02:58:18.359102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.799 [2024-12-16 02:58:18.359177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.799 [2024-12-16 02:58:18.359190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.799 [2024-12-16 02:58:18.359197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.799 [2024-12-16 02:58:18.359203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.799 [2024-12-16 02:58:18.359217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.799 qpair failed and we were unable to recover it.
00:36:47.799 [2024-12-16 02:58:18.369132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.799 [2024-12-16 02:58:18.369188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.799 [2024-12-16 02:58:18.369202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.799 [2024-12-16 02:58:18.369209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.799 [2024-12-16 02:58:18.369216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.799 [2024-12-16 02:58:18.369230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.799 qpair failed and we were unable to recover it.
00:36:47.799 [2024-12-16 02:58:18.379163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.799 [2024-12-16 02:58:18.379228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.799 [2024-12-16 02:58:18.379242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.799 [2024-12-16 02:58:18.379249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.799 [2024-12-16 02:58:18.379255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.799 [2024-12-16 02:58:18.379270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.799 qpair failed and we were unable to recover it.
00:36:47.799 [2024-12-16 02:58:18.389174] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.799 [2024-12-16 02:58:18.389232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.799 [2024-12-16 02:58:18.389246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.799 [2024-12-16 02:58:18.389252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.799 [2024-12-16 02:58:18.389259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.799 [2024-12-16 02:58:18.389273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.799 qpair failed and we were unable to recover it.
00:36:47.799 [2024-12-16 02:58:18.399210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.799 [2024-12-16 02:58:18.399279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.799 [2024-12-16 02:58:18.399295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.799 [2024-12-16 02:58:18.399302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.799 [2024-12-16 02:58:18.399308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.799 [2024-12-16 02:58:18.399323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.799 qpair failed and we were unable to recover it.
00:36:47.799 [2024-12-16 02:58:18.409235] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.799 [2024-12-16 02:58:18.409332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.799 [2024-12-16 02:58:18.409346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.799 [2024-12-16 02:58:18.409352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.799 [2024-12-16 02:58:18.409358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.799 [2024-12-16 02:58:18.409372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.799 qpair failed and we were unable to recover it.
00:36:47.799 [2024-12-16 02:58:18.419251] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.799 [2024-12-16 02:58:18.419309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.799 [2024-12-16 02:58:18.419322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.799 [2024-12-16 02:58:18.419329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.799 [2024-12-16 02:58:18.419336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.799 [2024-12-16 02:58:18.419350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.799 qpair failed and we were unable to recover it.
00:36:47.799 [2024-12-16 02:58:18.429298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.799 [2024-12-16 02:58:18.429365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.799 [2024-12-16 02:58:18.429379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.799 [2024-12-16 02:58:18.429386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.799 [2024-12-16 02:58:18.429392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.799 [2024-12-16 02:58:18.429406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.799 qpair failed and we were unable to recover it.
00:36:47.799 [2024-12-16 02:58:18.439321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.799 [2024-12-16 02:58:18.439389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.799 [2024-12-16 02:58:18.439404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.799 [2024-12-16 02:58:18.439410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.799 [2024-12-16 02:58:18.439420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.799 [2024-12-16 02:58:18.439434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.799 qpair failed and we were unable to recover it.
00:36:47.799 [2024-12-16 02:58:18.449352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:47.799 [2024-12-16 02:58:18.449408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:47.799 [2024-12-16 02:58:18.449422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:47.799 [2024-12-16 02:58:18.449429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:47.799 [2024-12-16 02:58:18.449435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:47.799 [2024-12-16 02:58:18.449449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:47.799 qpair failed and we were unable to recover it.
00:36:48.059 [2024-12-16 02:58:18.459373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:48.059 [2024-12-16 02:58:18.459429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:48.059 [2024-12-16 02:58:18.459443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:48.059 [2024-12-16 02:58:18.459449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:48.059 [2024-12-16 02:58:18.459456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0
00:36:48.059 [2024-12-16 02:58:18.459470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:48.059 qpair failed and we were unable to recover it.
00:36:48.060 [2024-12-16 02:58:18.469321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.060 [2024-12-16 02:58:18.469382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.060 [2024-12-16 02:58:18.469396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.060 [2024-12-16 02:58:18.469403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.060 [2024-12-16 02:58:18.469409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:48.060 [2024-12-16 02:58:18.469424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.060 qpair failed and we were unable to recover it. 
00:36:48.060 [2024-12-16 02:58:18.479405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.060 [2024-12-16 02:58:18.479456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.060 [2024-12-16 02:58:18.479471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.060 [2024-12-16 02:58:18.479478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.060 [2024-12-16 02:58:18.479484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:48.060 [2024-12-16 02:58:18.479498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.060 qpair failed and we were unable to recover it. 
00:36:48.060 [2024-12-16 02:58:18.489516] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.060 [2024-12-16 02:58:18.489566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.060 [2024-12-16 02:58:18.489580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.060 [2024-12-16 02:58:18.489587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.060 [2024-12-16 02:58:18.489594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:48.060 [2024-12-16 02:58:18.489608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.060 qpair failed and we were unable to recover it. 
00:36:48.060 [2024-12-16 02:58:18.499477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.060 [2024-12-16 02:58:18.499566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.060 [2024-12-16 02:58:18.499580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.060 [2024-12-16 02:58:18.499587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.060 [2024-12-16 02:58:18.499593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:48.060 [2024-12-16 02:58:18.499608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.060 qpair failed and we were unable to recover it. 
00:36:48.060 [2024-12-16 02:58:18.509461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.060 [2024-12-16 02:58:18.509537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.060 [2024-12-16 02:58:18.509550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.060 [2024-12-16 02:58:18.509557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.060 [2024-12-16 02:58:18.509564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:48.060 [2024-12-16 02:58:18.509579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.060 qpair failed and we were unable to recover it. 
00:36:48.060 [2024-12-16 02:58:18.519543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.060 [2024-12-16 02:58:18.519602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.060 [2024-12-16 02:58:18.519619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.060 [2024-12-16 02:58:18.519626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.060 [2024-12-16 02:58:18.519632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:48.060 [2024-12-16 02:58:18.519648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.060 qpair failed and we were unable to recover it. 
00:36:48.060 [2024-12-16 02:58:18.529590] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.060 [2024-12-16 02:58:18.529650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.060 [2024-12-16 02:58:18.529668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.060 [2024-12-16 02:58:18.529676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.060 [2024-12-16 02:58:18.529682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:48.060 [2024-12-16 02:58:18.529697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.060 qpair failed and we were unable to recover it. 
00:36:48.060 [2024-12-16 02:58:18.539546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.060 [2024-12-16 02:58:18.539605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.060 [2024-12-16 02:58:18.539620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.060 [2024-12-16 02:58:18.539627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.060 [2024-12-16 02:58:18.539634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:48.060 [2024-12-16 02:58:18.539649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.060 qpair failed and we were unable to recover it. 
00:36:48.060 [2024-12-16 02:58:18.549536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.060 [2024-12-16 02:58:18.549621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.060 [2024-12-16 02:58:18.549634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.060 [2024-12-16 02:58:18.549641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.060 [2024-12-16 02:58:18.549647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:48.060 [2024-12-16 02:58:18.549661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.060 qpair failed and we were unable to recover it. 
00:36:48.060 [2024-12-16 02:58:18.559568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.060 [2024-12-16 02:58:18.559623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.060 [2024-12-16 02:58:18.559636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.060 [2024-12-16 02:58:18.559643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.060 [2024-12-16 02:58:18.559649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:48.060 [2024-12-16 02:58:18.559664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.060 qpair failed and we were unable to recover it. 
00:36:48.060 [2024-12-16 02:58:18.569668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.060 [2024-12-16 02:58:18.569719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.060 [2024-12-16 02:58:18.569732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.060 [2024-12-16 02:58:18.569739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.060 [2024-12-16 02:58:18.569748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:48.060 [2024-12-16 02:58:18.569763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.060 qpair failed and we were unable to recover it. 
00:36:48.060 [2024-12-16 02:58:18.579714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.060 [2024-12-16 02:58:18.579770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.060 [2024-12-16 02:58:18.579784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.060 [2024-12-16 02:58:18.579791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.060 [2024-12-16 02:58:18.579797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:48.060 [2024-12-16 02:58:18.579811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.060 qpair failed and we were unable to recover it. 
00:36:48.060 [2024-12-16 02:58:18.589729] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.060 [2024-12-16 02:58:18.589784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.060 [2024-12-16 02:58:18.589797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.060 [2024-12-16 02:58:18.589804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.060 [2024-12-16 02:58:18.589810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:48.060 [2024-12-16 02:58:18.589824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.060 qpair failed and we were unable to recover it. 
00:36:48.060 [2024-12-16 02:58:18.599811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.061 [2024-12-16 02:58:18.599869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.061 [2024-12-16 02:58:18.599883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.061 [2024-12-16 02:58:18.599890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.061 [2024-12-16 02:58:18.599897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:48.061 [2024-12-16 02:58:18.599911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.061 qpair failed and we were unable to recover it. 
00:36:48.061 [2024-12-16 02:58:18.609769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.061 [2024-12-16 02:58:18.609821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.061 [2024-12-16 02:58:18.609834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.061 [2024-12-16 02:58:18.609841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.061 [2024-12-16 02:58:18.609850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:48.061 [2024-12-16 02:58:18.609865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.061 qpair failed and we were unable to recover it. 
00:36:48.061 [2024-12-16 02:58:18.619840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.061 [2024-12-16 02:58:18.619901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.061 [2024-12-16 02:58:18.619915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.061 [2024-12-16 02:58:18.619921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.061 [2024-12-16 02:58:18.619927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:48.061 [2024-12-16 02:58:18.619942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.061 qpair failed and we were unable to recover it. 
00:36:48.061 [2024-12-16 02:58:18.629890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.061 [2024-12-16 02:58:18.629949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.061 [2024-12-16 02:58:18.629963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.061 [2024-12-16 02:58:18.629970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.061 [2024-12-16 02:58:18.629978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:48.061 [2024-12-16 02:58:18.629993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.061 qpair failed and we were unable to recover it. 
00:36:48.061 [2024-12-16 02:58:18.639868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.061 [2024-12-16 02:58:18.639924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.061 [2024-12-16 02:58:18.639938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.061 [2024-12-16 02:58:18.639945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.061 [2024-12-16 02:58:18.639951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:48.061 [2024-12-16 02:58:18.639966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.061 qpair failed and we were unable to recover it. 
00:36:48.061 [2024-12-16 02:58:18.649889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.061 [2024-12-16 02:58:18.649951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.061 [2024-12-16 02:58:18.649966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.061 [2024-12-16 02:58:18.649973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.061 [2024-12-16 02:58:18.649979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:48.061 [2024-12-16 02:58:18.649994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.061 qpair failed and we were unable to recover it. 
00:36:48.061 [2024-12-16 02:58:18.659887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.061 [2024-12-16 02:58:18.659964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.061 [2024-12-16 02:58:18.659982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.061 [2024-12-16 02:58:18.659989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.061 [2024-12-16 02:58:18.659995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:48.061 [2024-12-16 02:58:18.660010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.061 qpair failed and we were unable to recover it. 
00:36:48.061 [2024-12-16 02:58:18.669945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.061 [2024-12-16 02:58:18.670003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.061 [2024-12-16 02:58:18.670016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.061 [2024-12-16 02:58:18.670023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.061 [2024-12-16 02:58:18.670030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:48.061 [2024-12-16 02:58:18.670044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.061 qpair failed and we were unable to recover it. 
00:36:48.061 [2024-12-16 02:58:18.679972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.061 [2024-12-16 02:58:18.680070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.061 [2024-12-16 02:58:18.680084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.061 [2024-12-16 02:58:18.680091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.061 [2024-12-16 02:58:18.680097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:48.061 [2024-12-16 02:58:18.680112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.061 qpair failed and we were unable to recover it. 
00:36:48.061 [2024-12-16 02:58:18.689930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.061 [2024-12-16 02:58:18.690026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.061 [2024-12-16 02:58:18.690039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.061 [2024-12-16 02:58:18.690046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.061 [2024-12-16 02:58:18.690052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:48.061 [2024-12-16 02:58:18.690066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.061 qpair failed and we were unable to recover it. 
00:36:48.061 [2024-12-16 02:58:18.700051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.061 [2024-12-16 02:58:18.700109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.061 [2024-12-16 02:58:18.700122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.061 [2024-12-16 02:58:18.700130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.061 [2024-12-16 02:58:18.700139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:48.061 [2024-12-16 02:58:18.700154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.061 qpair failed and we were unable to recover it. 
00:36:48.061 [2024-12-16 02:58:18.710064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.061 [2024-12-16 02:58:18.710114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.061 [2024-12-16 02:58:18.710127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.061 [2024-12-16 02:58:18.710134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.061 [2024-12-16 02:58:18.710140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:48.061 [2024-12-16 02:58:18.710155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.061 qpair failed and we were unable to recover it. 
00:36:48.321 [2024-12-16 02:58:18.720072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.321 [2024-12-16 02:58:18.720136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.321 [2024-12-16 02:58:18.720150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.321 [2024-12-16 02:58:18.720157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.321 [2024-12-16 02:58:18.720164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:48.321 [2024-12-16 02:58:18.720178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.321 qpair failed and we were unable to recover it. 
00:36:48.321 [2024-12-16 02:58:18.730130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.322 [2024-12-16 02:58:18.730188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.322 [2024-12-16 02:58:18.730203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.322 [2024-12-16 02:58:18.730210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.322 [2024-12-16 02:58:18.730216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:48.322 [2024-12-16 02:58:18.730231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.322 qpair failed and we were unable to recover it. 
00:36:48.322 [2024-12-16 02:58:18.740169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.322 [2024-12-16 02:58:18.740222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.322 [2024-12-16 02:58:18.740236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.322 [2024-12-16 02:58:18.740242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.322 [2024-12-16 02:58:18.740250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xd9fcd0 00:36:48.322 [2024-12-16 02:58:18.740264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:48.322 qpair failed and we were unable to recover it. 
00:36:48.322 [2024-12-16 02:58:18.750180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.322 [2024-12-16 02:58:18.750322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.322 [2024-12-16 02:58:18.750379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.322 [2024-12-16 02:58:18.750404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.322 [2024-12-16 02:58:18.750424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f273c000b90 00:36:48.322 [2024-12-16 02:58:18.750476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:48.322 qpair failed and we were unable to recover it. 
00:36:48.322 [2024-12-16 02:58:18.760142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.322 [2024-12-16 02:58:18.760249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.322 [2024-12-16 02:58:18.760275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.322 [2024-12-16 02:58:18.760290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.322 [2024-12-16 02:58:18.760303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f273c000b90 00:36:48.322 [2024-12-16 02:58:18.760334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:48.322 qpair failed and we were unable to recover it. 
00:36:48.322 [2024-12-16 02:58:18.770277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.322 [2024-12-16 02:58:18.770373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.322 [2024-12-16 02:58:18.770426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.322 [2024-12-16 02:58:18.770450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.322 [2024-12-16 02:58:18.770470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2738000b90 00:36:48.322 [2024-12-16 02:58:18.770522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:48.322 qpair failed and we were unable to recover it. 
00:36:48.322 [2024-12-16 02:58:18.780274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:48.322 [2024-12-16 02:58:18.780347] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:48.322 [2024-12-16 02:58:18.780374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:48.322 [2024-12-16 02:58:18.780389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:48.322 [2024-12-16 02:58:18.780402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f2738000b90 00:36:48.322 [2024-12-16 02:58:18.780433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:48.322 qpair failed and we were unable to recover it. 00:36:48.322 [2024-12-16 02:58:18.780531] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:36:48.322 A controller has encountered a failure and is being reset. 00:36:48.322 Controller properly reset. 00:36:48.322 Initializing NVMe Controllers 00:36:48.322 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:48.322 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:48.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:48.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:48.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:48.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:48.322 Initialization complete. Launching workers. 
00:36:48.322 Starting thread on core 1 00:36:48.322 Starting thread on core 2 00:36:48.322 Starting thread on core 3 00:36:48.322 Starting thread on core 0 00:36:48.322 02:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:48.322 00:36:48.322 real 0m10.753s 00:36:48.322 user 0m19.120s 00:36:48.322 sys 0m4.830s 00:36:48.322 02:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:48.322 02:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:48.322 ************************************ 00:36:48.322 END TEST nvmf_target_disconnect_tc2 00:36:48.322 ************************************ 00:36:48.322 02:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:36:48.322 02:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:48.322 02:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:36:48.322 02:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:48.322 02:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:36:48.322 02:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:48.322 02:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:36:48.322 02:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:48.322 02:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:48.322 rmmod nvme_tcp 00:36:48.322 rmmod nvme_fabrics 00:36:48.322 rmmod nvme_keyring 00:36:48.322 02:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:36:48.322 02:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:36:48.322 02:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:36:48.322 02:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1210775 ']' 00:36:48.322 02:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1210775 00:36:48.322 02:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1210775 ']' 00:36:48.322 02:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1210775 00:36:48.322 02:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:36:48.322 02:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:48.322 02:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1210775 00:36:48.322 02:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:36:48.322 02:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:36:48.322 02:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1210775' 00:36:48.322 killing process with pid 1210775 00:36:48.322 02:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 1210775 00:36:48.322 02:58:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1210775 00:36:48.581 02:58:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:48.581 02:58:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:48.582 02:58:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:48.582 02:58:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:36:48.582 02:58:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:36:48.582 02:58:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:48.582 02:58:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:36:48.582 02:58:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:48.582 02:58:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:48.582 02:58:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:48.582 02:58:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:48.582 02:58:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:51.117 02:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:51.117 00:36:51.117 real 0m19.535s 00:36:51.117 user 0m46.825s 00:36:51.117 sys 0m9.735s 00:36:51.117 02:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:51.117 02:58:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:51.117 ************************************ 00:36:51.117 END TEST nvmf_target_disconnect 00:36:51.117 ************************************ 00:36:51.117 02:58:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:51.117 00:36:51.117 real 7m22.483s 00:36:51.117 user 16m51.696s 00:36:51.117 sys 2m9.130s 00:36:51.117 02:58:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:51.117 02:58:21 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:51.117 ************************************ 00:36:51.117 END TEST nvmf_host 00:36:51.117 ************************************ 00:36:51.117 02:58:21 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:36:51.117 02:58:21 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:36:51.117 02:58:21 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:51.117 02:58:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:51.117 02:58:21 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:51.118 02:58:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:51.118 ************************************ 00:36:51.118 START TEST nvmf_target_core_interrupt_mode 00:36:51.118 ************************************ 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:51.118 * Looking for test storage... 
00:36:51.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:36:51.118 02:58:21 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:51.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.118 --rc 
genhtml_branch_coverage=1 00:36:51.118 --rc genhtml_function_coverage=1 00:36:51.118 --rc genhtml_legend=1 00:36:51.118 --rc geninfo_all_blocks=1 00:36:51.118 --rc geninfo_unexecuted_blocks=1 00:36:51.118 00:36:51.118 ' 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:51.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.118 --rc genhtml_branch_coverage=1 00:36:51.118 --rc genhtml_function_coverage=1 00:36:51.118 --rc genhtml_legend=1 00:36:51.118 --rc geninfo_all_blocks=1 00:36:51.118 --rc geninfo_unexecuted_blocks=1 00:36:51.118 00:36:51.118 ' 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:51.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.118 --rc genhtml_branch_coverage=1 00:36:51.118 --rc genhtml_function_coverage=1 00:36:51.118 --rc genhtml_legend=1 00:36:51.118 --rc geninfo_all_blocks=1 00:36:51.118 --rc geninfo_unexecuted_blocks=1 00:36:51.118 00:36:51.118 ' 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:51.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.118 --rc genhtml_branch_coverage=1 00:36:51.118 --rc genhtml_function_coverage=1 00:36:51.118 --rc genhtml_legend=1 00:36:51.118 --rc geninfo_all_blocks=1 00:36:51.118 --rc geninfo_unexecuted_blocks=1 00:36:51.118 00:36:51.118 ' 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:51.118 
02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.118 02:58:21 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:51.118 
02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:51.118 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:51.119 ************************************ 00:36:51.119 START TEST nvmf_abort 00:36:51.119 ************************************ 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:51.119 * Looking for test storage... 
00:36:51.119 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:36:51.119 02:58:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:51.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.119 --rc genhtml_branch_coverage=1 00:36:51.119 --rc genhtml_function_coverage=1 00:36:51.119 --rc genhtml_legend=1 00:36:51.119 --rc geninfo_all_blocks=1 00:36:51.119 --rc geninfo_unexecuted_blocks=1 00:36:51.119 00:36:51.119 ' 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:51.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.119 --rc genhtml_branch_coverage=1 00:36:51.119 --rc genhtml_function_coverage=1 00:36:51.119 --rc genhtml_legend=1 00:36:51.119 --rc geninfo_all_blocks=1 00:36:51.119 --rc geninfo_unexecuted_blocks=1 00:36:51.119 00:36:51.119 ' 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:51.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.119 --rc genhtml_branch_coverage=1 00:36:51.119 --rc genhtml_function_coverage=1 00:36:51.119 --rc genhtml_legend=1 00:36:51.119 --rc geninfo_all_blocks=1 00:36:51.119 --rc geninfo_unexecuted_blocks=1 00:36:51.119 00:36:51.119 ' 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:51.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:51.119 --rc genhtml_branch_coverage=1 00:36:51.119 --rc genhtml_function_coverage=1 00:36:51.119 --rc genhtml_legend=1 00:36:51.119 --rc geninfo_all_blocks=1 00:36:51.119 --rc geninfo_unexecuted_blocks=1 00:36:51.119 00:36:51.119 ' 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:51.119 02:58:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.119 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:36:51.120 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:51.120 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:36:51.120 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:51.120 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:51.120 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:51.120 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:51.120 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:51.120 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:51.120 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:51.120 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:51.120 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:51.120 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:51.120 02:58:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:51.120 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:36:51.120 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:36:51.120 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:51.120 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:51.120 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:51.120 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:51.120 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:51.120 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:51.120 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:51.120 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:51.379 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:51.379 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:51.379 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:36:51.379 02:58:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:57.948 02:58:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:57.948 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:57.948 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:57.948 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:57.949 
02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:57.949 Found net devices under 0000:af:00.0: cvl_0_0 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:57.949 Found net devices under 0000:af:00.1: cvl_0_1 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:57.949 02:58:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:57.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:57.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:36:57.949 00:36:57.949 --- 10.0.0.2 ping statistics --- 00:36:57.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:57.949 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:57.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:57.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:36:57.949 00:36:57.949 --- 10.0.0.1 ping statistics --- 00:36:57.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:57.949 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1215226 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1215226 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1215226 ']' 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:57.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:57.949 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:57.949 [2024-12-16 02:58:27.718109] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:57.949 [2024-12-16 02:58:27.719052] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:36:57.949 [2024-12-16 02:58:27.719096] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:57.949 [2024-12-16 02:58:27.800565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:57.949 [2024-12-16 02:58:27.822785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:57.949 [2024-12-16 02:58:27.822822] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:57.949 [2024-12-16 02:58:27.822830] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:57.949 [2024-12-16 02:58:27.822837] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:57.949 [2024-12-16 02:58:27.822842] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:57.949 [2024-12-16 02:58:27.824118] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:36:57.949 [2024-12-16 02:58:27.824227] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:57.949 [2024-12-16 02:58:27.824228] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:36:57.950 [2024-12-16 02:58:27.886229] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:57.950 [2024-12-16 02:58:27.887064] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:57.950 [2024-12-16 02:58:27.887418] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:36:57.950 [2024-12-16 02:58:27.887538] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:57.950 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:57.950 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:36:57.950 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:57.950 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:57.950 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:57.950 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:57.950 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:36:57.950 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.950 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:57.950 [2024-12-16 02:58:27.960966] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:57.950 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.950 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:36:57.950 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.950 02:58:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:36:57.950 Malloc0 00:36:57.950 02:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.950 02:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:57.950 02:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.950 02:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:57.950 Delay0 00:36:57.950 02:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.950 02:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:57.950 02:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.950 02:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:57.950 02:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.950 02:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:36:57.950 02:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.950 02:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:57.950 02:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.950 02:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:36:57.950 02:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.950 02:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:57.950 [2024-12-16 02:58:28.048861] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:57.950 02:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.950 02:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:57.950 02:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:57.950 02:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:57.950 02:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:57.950 02:58:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:36:57.950 [2024-12-16 02:58:28.176166] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:59.853 Initializing NVMe Controllers 00:36:59.853 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:36:59.853 controller IO queue size 128 less than required 00:36:59.853 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:36:59.853 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:36:59.853 Initialization complete. Launching workers. 
00:36:59.853 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37649 00:36:59.853 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37706, failed to submit 66 00:36:59.853 success 37649, unsuccessful 57, failed 0 00:36:59.853 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:59.853 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:59.853 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:59.853 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:59.853 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:36:59.853 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:36:59.853 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:59.853 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:36:59.854 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:59.854 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:36:59.854 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:59.854 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:59.854 rmmod nvme_tcp 00:36:59.854 rmmod nvme_fabrics 00:36:59.854 rmmod nvme_keyring 00:36:59.854 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:59.854 02:58:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:36:59.854 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:36:59.854 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1215226 ']' 00:36:59.854 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1215226 00:36:59.854 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1215226 ']' 00:36:59.854 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1215226 00:36:59.854 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:36:59.854 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:59.854 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1215226 00:36:59.854 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:59.854 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:59.854 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1215226' 00:36:59.854 killing process with pid 1215226 00:36:59.854 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1215226 00:36:59.854 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1215226 00:36:59.854 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:59.854 02:58:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:59.854 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:59.854 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:36:59.854 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:36:59.854 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:59.854 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:37:00.113 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:00.113 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:00.113 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:00.113 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:00.113 02:58:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:02.017 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:02.017 00:37:02.017 real 0m11.015s 00:37:02.017 user 0m10.224s 00:37:02.017 sys 0m5.583s 00:37:02.017 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:02.017 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:02.017 ************************************ 00:37:02.017 END TEST nvmf_abort 00:37:02.017 ************************************ 00:37:02.017 02:58:32 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:37:02.017 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:02.017 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:02.017 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:02.017 ************************************ 00:37:02.017 START TEST nvmf_ns_hotplug_stress 00:37:02.017 ************************************ 00:37:02.017 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:37:02.277 * Looking for test storage... 
00:37:02.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:37:02.277 02:58:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:37:02.277 02:58:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:02.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.277 --rc genhtml_branch_coverage=1 00:37:02.277 --rc genhtml_function_coverage=1 00:37:02.277 --rc genhtml_legend=1 00:37:02.277 --rc geninfo_all_blocks=1 00:37:02.277 --rc geninfo_unexecuted_blocks=1 00:37:02.277 00:37:02.277 ' 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:02.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.277 --rc genhtml_branch_coverage=1 00:37:02.277 --rc genhtml_function_coverage=1 00:37:02.277 --rc genhtml_legend=1 00:37:02.277 --rc geninfo_all_blocks=1 00:37:02.277 --rc geninfo_unexecuted_blocks=1 00:37:02.277 00:37:02.277 ' 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:02.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.277 --rc genhtml_branch_coverage=1 00:37:02.277 --rc genhtml_function_coverage=1 00:37:02.277 --rc genhtml_legend=1 00:37:02.277 --rc geninfo_all_blocks=1 00:37:02.277 --rc geninfo_unexecuted_blocks=1 00:37:02.277 00:37:02.277 ' 00:37:02.277 02:58:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:02.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:02.277 --rc genhtml_branch_coverage=1 00:37:02.277 --rc genhtml_function_coverage=1 00:37:02.277 --rc genhtml_legend=1 00:37:02.277 --rc geninfo_all_blocks=1 00:37:02.277 --rc geninfo_unexecuted_blocks=1 00:37:02.277 00:37:02.277 ' 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:37:02.277 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:02.278 02:58:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.278 
02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:37:02.278 02:58:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:37:08.848 
02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:08.848 02:58:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:08.848 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:08.848 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:08.849 02:58:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:08.849 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:08.849 
02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:08.849 Found net devices under 0000:af:00.0: cvl_0_0 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:08.849 Found net devices under 0000:af:00.1: cvl_0_1 00:37:08.849 
02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:08.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:08.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:37:08.849 00:37:08.849 --- 10.0.0.2 ping statistics --- 00:37:08.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:08.849 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:08.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:08.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:37:08.849 00:37:08.849 --- 10.0.0.1 ping statistics --- 00:37:08.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:08.849 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:08.849 02:58:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1219143 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1219143 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1219143 ']' 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:08.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:08.849 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:08.849 [2024-12-16 02:58:38.757639] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:08.849 [2024-12-16 02:58:38.758538] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:37:08.849 [2024-12-16 02:58:38.758570] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:08.849 [2024-12-16 02:58:38.817957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:08.849 [2024-12-16 02:58:38.840015] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:08.849 [2024-12-16 02:58:38.840049] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:08.849 [2024-12-16 02:58:38.840056] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:08.850 [2024-12-16 02:58:38.840061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:08.850 [2024-12-16 02:58:38.840066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:08.850 [2024-12-16 02:58:38.841370] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:37:08.850 [2024-12-16 02:58:38.841481] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:08.850 [2024-12-16 02:58:38.841482] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:37:08.850 [2024-12-16 02:58:38.904539] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:08.850 [2024-12-16 02:58:38.905312] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:08.850 [2024-12-16 02:58:38.905731] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:08.850 [2024-12-16 02:58:38.905830] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:08.850 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:08.850 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:37:08.850 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:08.850 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:08.850 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:08.850 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:08.850 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:37:08.850 02:58:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:08.850 [2024-12-16 02:58:39.142143] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:08.850 02:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:08.850 02:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:09.109 [2024-12-16 02:58:39.522582] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:09.109 02:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:09.109 02:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:37:09.368 Malloc0 00:37:09.368 02:58:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:09.627 Delay0 00:37:09.627 02:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:09.886 02:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:37:09.886 NULL1 00:37:09.886 02:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:37:10.144 02:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1219407 00:37:10.144 02:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1219407 00:37:10.144 02:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:37:10.144 02:58:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:11.521 Read completed with error (sct=0, sc=11) 00:37:11.521 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:11.521 02:58:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:11.521 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:11.521 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:37:11.521 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:11.521 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:11.521 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:11.521 02:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:37:11.521 02:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:37:11.780 true 00:37:11.780 02:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1219407 00:37:11.780 02:58:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:12.716 02:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:12.976 02:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:37:12.976 02:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:37:12.976 true 00:37:12.976 02:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1219407 00:37:12.976 02:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:37:13.235 02:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:13.494 02:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:37:13.494 02:58:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:37:13.753 true 00:37:13.753 02:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1219407 00:37:13.753 02:58:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:14.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:14.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:14.688 02:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:14.688 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:14.946 02:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:37:14.946 02:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:37:14.946 true 00:37:15.203 02:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1219407 00:37:15.204 02:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:15.204 02:58:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:15.462 02:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:37:15.462 02:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:37:15.721 true 00:37:15.721 02:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1219407 00:37:15.721 02:58:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:16.658 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:16.658 02:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:16.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:16.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:16.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:16.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:16.917 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:16.917 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:16.917 02:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:37:16.917 02:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:37:17.176 true 00:37:17.176 02:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1219407 00:37:17.176 02:58:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:18.112 02:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:18.112 02:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:37:18.112 02:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:37:18.371 true 00:37:18.371 02:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1219407 00:37:18.371 02:58:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:18.630 02:58:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:18.888 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:37:18.888 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:37:18.888 true 00:37:18.888 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1219407 00:37:18.888 02:58:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:20.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:20.265 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:20.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:20.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:20.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:20.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:20.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:20.265 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:20.265 02:58:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:37:20.265 02:58:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:37:20.524 true 00:37:20.524 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1219407 00:37:20.524 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:21.460 02:58:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:21.460 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:37:21.460 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:37:21.719 true 00:37:21.719 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1219407 00:37:21.719 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:21.978 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:21.978 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 
00:37:21.978 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:37:22.237 true 00:37:22.237 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1219407 00:37:22.237 02:58:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:23.173 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:23.173 02:58:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:23.432 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:37:23.432 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:37:23.691 true 00:37:23.691 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1219407 00:37:23.691 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:23.950 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:24.209 02:58:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:37:24.209 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:37:24.209 true 00:37:24.209 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1219407 00:37:24.209 02:58:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:25.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:25.586 02:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:25.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:25.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:25.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:25.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:25.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:25.586 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:25.586 02:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:37:25.586 02:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:37:25.845 true 00:37:25.845 02:58:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1219407 00:37:25.845 02:58:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:26.781 02:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:27.040 02:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:37:27.040 02:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:37:27.040 true 00:37:27.040 02:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1219407 00:37:27.040 02:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:27.298 02:58:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:27.557 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:37:27.557 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:37:27.816 true 
00:37:27.816 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1219407 00:37:27.816 02:58:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:28.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:28.753 02:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:28.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:29.011 02:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:37:29.011 02:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:37:29.270 true 00:37:29.270 02:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1219407 00:37:29.270 02:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:29.529 02:58:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:29.529 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:37:29.529 02:59:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:37:29.788 true 00:37:29.788 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1219407 00:37:29.788 02:59:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:31.169 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:31.169 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:31.169 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:31.169 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:31.169 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:31.169 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:31.169 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:31.169 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:31.169 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:37:31.169 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:37:31.484 true 00:37:31.484 02:59:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1219407 00:37:31.485 02:59:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:32.117 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:32.117 02:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:32.376 02:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:37:32.376 02:59:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:37:32.634 true 00:37:32.634 02:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1219407 00:37:32.634 02:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:32.892 02:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:32.892 02:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:37:32.892 02:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:37:33.151 true 00:37:33.151 02:59:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1219407 00:37:33.151 02:59:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:34.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:34.527 02:59:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:34.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:34.527 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:34.527 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:37:34.527 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:37:34.789 true 00:37:34.789 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1219407 00:37:34.789 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:35.051 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:35.308 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1023 00:37:35.308 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:37:35.308 true 00:37:35.308 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1219407 00:37:35.308 02:59:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:36.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:36.684 02:59:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:36.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:36.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:36.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:36.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:36.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:36.684 02:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:37:36.684 02:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:37:36.943 true 00:37:36.943 02:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1219407 00:37:36.943 02:59:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:37.879 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:37.879 02:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:37.879 02:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:37:37.879 02:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:37:38.138 true 00:37:38.138 02:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1219407 00:37:38.138 02:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:38.396 02:59:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:38.396 02:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:37:38.396 02:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:37:38.655 true 00:37:38.655 02:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill 
-0 1219407 00:37:38.655 02:59:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:40.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:40.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:40.031 02:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:40.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:40.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:40.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:40.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:40.031 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:40.031 02:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:37:40.031 02:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:37:40.289 true 00:37:40.289 02:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1219407 00:37:40.289 02:59:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:41.224 02:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:41.224 Initializing NVMe Controllers
00:37:41.224 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:37:41.224 Controller IO queue size 128, less than required.
00:37:41.224 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:41.224 Controller IO queue size 128, less than required.
00:37:41.224 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:41.224 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:37:41.224 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:37:41.224 Initialization complete. Launching workers.
00:37:41.224 ========================================================
00:37:41.224                                                                       Latency(us)
00:37:41.224 Device Information                                                   :     IOPS   MiB/s   Average       min        max
00:37:41.224 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  2035.63    0.99  43167.21   2483.42 1012469.25
00:37:41.224 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17949.10    8.76   7131.17   1558.92  331055.62
00:37:41.224 ========================================================
00:37:41.224 Total                                                                : 19984.73    9.76  10801.78   1558.92 1012469.25
00:37:41.224
00:37:41.224 02:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:37:41.224 02:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:37:41.483 true 00:37:41.483 02:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1219407 00:37:41.483
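The cycle traced above repeats one pattern: check the target process is still alive with `kill -0`, hot-remove namespace 1 via `nvmf_subsystem_remove_ns`, re-add the `Delay0` bdev via `nvmf_subsystem_add_ns`, bump `null_size`, and resize `NULL1`. A minimal self-contained sketch of that loop, with a stubbed `rpc` function standing in for `scripts/rpc.py` (the stub, the starting size, and the stop condition are assumptions for illustration, not the real ns_hotplug_stress.sh):

```shell
#!/usr/bin/env bash
# Sketch of the hotplug loop visible in the trace; `rpc` is a stub, not scripts/rpc.py.
rpc() { echo "rpc $*"; }       # stand-in for /path/to/spdk/scripts/rpc.py

null_size=1014                 # grows by one block each iteration, as in the log
stress_pid=$$                  # stand-in PID for the I/O stress process being polled

# Loop while the stress process is alive (kill -0 sends no signal, only checks).
while kill -0 "$stress_pid" 2>/dev/null && (( null_size < 1018 )); do
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-remove ns 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add it back
    (( null_size++ ))                                            # next size
    rpc bdev_null_resize NULL1 "$null_size"                      # grow null bdev
done
```

When the stress process exits, `kill -0` fails with "No such process" and the loop ends, which is exactly what the `line 44: kill: (1219407) - No such process` message below records.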
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1219407) - No such process 00:37:41.483 02:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1219407 00:37:41.483 02:59:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:41.483 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:41.741 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:37:41.741 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:37:41.741 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:37:41.741 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:41.741 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:37:41.999 null0 00:37:41.999 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:41.999 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:41.999 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:37:41.999 null1 00:37:42.258 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:42.258 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:42.258 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:37:42.258 null2 00:37:42.258 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:42.258 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:42.258 02:59:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:37:42.517 null3 00:37:42.517 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:42.517 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:42.517 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:37:42.776 null4 00:37:42.776 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:42.776 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:42.776 02:59:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:37:42.776 null5 00:37:42.776 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:42.776 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:42.776 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:37:43.035 null6 00:37:43.035 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:43.035 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:43.035 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:37:43.294 null7 00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:37:43.294 02:59:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:43.294 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
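The `add_remove N nullN` invocations above are launched as background jobs, one per null bdev, with each job's PID appended via `pids+=($!)` so the script can later `wait` on all eight. A simplified sketch of that spawn/collect pattern (the `add_remove` body here is a stub; the real function drives `nvmf_subsystem_add_ns`/`nvmf_subsystem_remove_ns` through rpc.py):

```shell
#!/usr/bin/env bash
# Sketch of the 8-way background spawn/wait pattern; add_remove is a stub.
add_remove() {                  # real version add/removes nsid $1 backed by bdev $2
    local nsid=$1 bdev=$2 i
    for ((i = 0; i < 10; i++)); do
        :                       # rpc.py nvmf_subsystem_add_ns / remove_ns would run here
    done
}

nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
    add_remove $((i + 1)) "null$i" &   # one background worker per null bdev
    pids+=($!)                         # collect its PID
done
wait "${pids[@]}"                      # block until every worker finishes
```

Running the eight workers concurrently is what makes this a hotplug *stress* test: add/remove RPCs for different namespaces race against each other on the same subsystem.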
00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1224746 1224749 1224751 1224754 1224758 1224761 1224763 1224766
00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:43.295 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:37:43.554 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:43.554 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:37:43.554 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:43.554 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:37:43.554 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:37:43.554 02:59:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:37:43.554 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:37:43.554 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:37:43.554 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:43.554 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:43.554 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:37:43.554 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:43.554 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:43.554 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:37:43.554 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:43.554 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:43.554 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:37:43.554 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:43.554 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:43.554 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:37:43.554 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:43.554 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:43.554 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:37:43.554 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:43.554 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:43.554 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:37:43.554 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:43.554 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:43.554 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:37:43.554 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:43.554 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:43.554 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:37:43.813 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:43.813 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:43.813 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:37:43.813 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:37:43.813 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:37:43.813 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:37:43.813 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:37:43.813 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:37:44.072 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.072 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.072 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:37:44.072 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.072 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.072 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:37:44.072 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.072 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.072 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.072 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.072 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:37:44.072 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:37:44.072 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.072 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.072 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:37:44.072 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.072 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.072 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:37:44.072 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.072 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.072 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:37:44.072 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.072 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.072 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:37:44.330 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:37:44.330 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:44.330 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:37:44.330 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:37:44.330 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:37:44.330 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:37:44.330 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:44.330 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:37:44.589 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.589 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.589 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:37:44.589 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.589 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.589 02:59:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:37:44.589 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.589 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.589 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:37:44.589 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.589 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.589 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.589 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.589 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.589 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.589 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:37:44.589 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:37:44.589 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:37:44.589 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.589 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.589 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:37:44.589 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.589 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.589 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:37:44.590 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:44.590 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:37:44.590 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:37:44.590 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:37:44.590 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:37:44.590 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:37:44.590 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:44.590 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:37:44.848 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.848 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.848 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:37:44.848 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.848 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.848 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:37:44.848 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.848 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.848 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:37:44.848 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.848 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.848 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:37:44.848 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.848 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.848 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:37:44.848 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.848 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.848 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:37:44.848 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.848 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.848 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:37:44.848 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:44.848 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:44.848 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:37:45.105 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:37:45.105 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:37:45.106 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:45.106 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:37:45.106 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:37:45.106 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:37:45.106 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:37:45.106 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:45.363 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.363 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.363 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:37:45.363 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.363 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.363 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.363 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.363 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:37:45.363 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:37:45.363 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.364 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.364 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:37:45.364 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.364 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.364 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:37:45.364 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.364 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.364 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:37:45.364 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.364 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.364 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:37:45.364 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.364 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.364 02:59:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:37:45.622 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:37:45.622 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:37:45.622 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:37:45.622 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:45.622 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:45.622 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:37:45.622 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:37:45.622 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:37:45.622 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.622 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.622 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.622 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:37:45.622 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.622 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:37:45.622 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.622 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.622 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:37:45.622 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.622 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.622 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:37:45.622 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.622 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.622 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:37:45.622 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.622 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.622 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:37:45.880 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.880 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.880 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:37:45.880 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:45.880 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:45.880 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:37:45.880 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:37:45.880 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:37:45.880 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:37:45.880 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:37:45.880 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:45.880 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:37:45.880 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:45.880 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:37:46.139 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:46.139 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:46.139 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:37:46.139 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:46.139 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:46.139 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:37:46.139 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:46.139 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:46.139 02:59:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:46.139 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.139 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.139 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:46.140 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.140 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.140 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.140 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:46.140 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.140 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.140 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.140 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:46.140 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:46.140 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.140 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.140 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:46.398 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:46.398 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:46.398 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:46.398 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:46.398 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:46.398 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:46.398 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:46.398 02:59:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:46.657 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.657 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.657 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:46.657 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.657 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.657 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:46.657 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.657 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.658 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:46.658 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.658 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.658 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:46.658 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.658 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.658 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:46.658 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.658 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.658 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 
null4 00:37:46.658 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.658 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.658 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:46.658 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.658 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.658 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:46.658 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:46.658 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:46.658 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:46.658 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:37:46.917 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:46.917 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:46.917 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:46.917 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:46.917 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.917 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.917 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:46.917 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.917 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.917 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:46.917 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.917 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.917 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:46.917 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.917 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.917 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:46.917 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.917 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.917 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:46.917 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.917 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.917 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:46.917 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.917 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.917 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:46.917 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:46.917 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:46.917 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:47.176 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:47.176 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:47.176 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:47.176 02:59:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:47.176 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:47.176 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:47.176 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:47.176 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:47.436 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.436 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.436 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.436 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.436 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.436 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:37:47.436 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.436 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.436 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.436 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.436 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.436 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.436 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.436 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.436 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:47.436 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:47.436 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:37:47.436 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:37:47.436 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:47.436 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:37:47.436 02:59:17 
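The trace above is the inner loop of target/ns_hotplug_stress.sh: ten iterations that add namespaces 1..8 (backed by null bdevs null0..null7) to nqn.2016-06.io.spdk:cnode1 and then remove them again. A minimal sketch of that loop, assuming the rpc.py path and NQN from the log; `rpc` is a stand-in that echoes commands instead of issuing real RPCs (the real run backgrounds each call, which is why the traced ordering is shuffled):

```shell
# Sketch of the ns_hotplug_stress loop seen in the trace; hotplug_cycle is a
# hypothetical helper name, and "echo rpc.py" replaces the real RPC client.
hotplug_cycle() {
  local rpc="echo rpc.py"                  # stand-in: print instead of execute
  local nqn="nqn.2016-06.io.spdk:cnode1"
  local i=0
  while (( i < 10 )); do
    for n in {1..8}; do
      # namespace ID n is backed by null bdev null(n-1), as in the log
      $rpc nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
    done
    for n in {1..8}; do
      $rpc nvmf_subsystem_remove_ns "$nqn" "$n"
    done
    (( ++i ))
  done
}
```

Each cycle exercises the target's namespace attach/detach paths under interrupt mode; the stress comes from repeating it while I/O is in flight.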
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:47.436 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:37:47.436 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:47.436 02:59:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:47.436 rmmod nvme_tcp 00:37:47.436 rmmod nvme_fabrics 00:37:47.436 rmmod nvme_keyring 00:37:47.436 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:47.436 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:37:47.436 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:37:47.436 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1219143 ']' 00:37:47.436 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1219143 00:37:47.436 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1219143 ']' 00:37:47.436 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1219143 00:37:47.436 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:37:47.436 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:47.436 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1219143 00:37:47.436 02:59:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:47.695 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:47.695 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1219143' 00:37:47.695 killing process with pid 1219143 00:37:47.695 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1219143 00:37:47.695 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1219143 00:37:47.695 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:47.695 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:47.695 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:47.695 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:37:47.695 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:37:47.695 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:47.695 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:37:47.695 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:47.695 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:47.695 02:59:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:47.695 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:47.695 02:59:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:50.231 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:50.231 00:37:50.231 real 0m47.678s 00:37:50.231 user 2m58.913s 00:37:50.231 sys 0m19.501s 00:37:50.231 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:50.231 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:50.231 ************************************ 00:37:50.231 END TEST nvmf_ns_hotplug_stress 00:37:50.231 ************************************ 00:37:50.231 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:50.231 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:50.231 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:50.231 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:50.231 ************************************ 00:37:50.231 START TEST nvmf_delete_subsystem 00:37:50.231 ************************************ 00:37:50.231 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:50.231 * Looking for test storage... 00:37:50.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:50.231 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:50.231 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:37:50.231 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:50.231 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:50.231 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:50.231 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:50.231 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:50.231 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:37:50.231 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:37:50.231 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:37:50.231 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:37:50.231 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:37:50.231 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:37:50.231 
02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:37:50.231 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:50.231 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:37:50.231 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:37:50.231 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:50.231 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:50.231 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:37:50.231 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:37:50.231 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:37:50.232 02:59:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:50.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:50.232 --rc genhtml_branch_coverage=1 00:37:50.232 --rc genhtml_function_coverage=1 00:37:50.232 --rc genhtml_legend=1 00:37:50.232 --rc geninfo_all_blocks=1 00:37:50.232 --rc geninfo_unexecuted_blocks=1 00:37:50.232 00:37:50.232 ' 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:50.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:50.232 --rc genhtml_branch_coverage=1 00:37:50.232 --rc genhtml_function_coverage=1 00:37:50.232 --rc genhtml_legend=1 00:37:50.232 --rc geninfo_all_blocks=1 00:37:50.232 --rc geninfo_unexecuted_blocks=1 00:37:50.232 00:37:50.232 ' 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:50.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:50.232 --rc genhtml_branch_coverage=1 00:37:50.232 --rc genhtml_function_coverage=1 00:37:50.232 --rc genhtml_legend=1 00:37:50.232 --rc geninfo_all_blocks=1 00:37:50.232 --rc 
geninfo_unexecuted_blocks=1 00:37:50.232 00:37:50.232 ' 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:50.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:50.232 --rc genhtml_branch_coverage=1 00:37:50.232 --rc genhtml_function_coverage=1 00:37:50.232 --rc genhtml_legend=1 00:37:50.232 --rc geninfo_all_blocks=1 00:37:50.232 --rc geninfo_unexecuted_blocks=1 00:37:50.232 00:37:50.232 ' 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.232 
02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:50.232 02:59:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:37:50.232 02:59:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:56.801 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.1 (0x8086 - 0x159b)' 00:37:56.801 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:56.801 02:59:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:56.801 Found net devices under 0000:af:00.0: cvl_0_0 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:56.801 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:56.802 Found net devices under 0000:af:00.1: cvl_0_1 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:56.802 02:59:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:37:56.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:56.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:37:56.802 00:37:56.802 --- 10.0.0.2 ping statistics --- 00:37:56.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:56.802 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:56.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:56.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:37:56.802 00:37:56.802 --- 10.0.0.1 ping statistics --- 00:37:56.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:56.802 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1228945 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1228945 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1228945 ']' 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:56.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:56.802 [2024-12-16 02:59:26.590825] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:56.802 [2024-12-16 02:59:26.591692] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:37:56.802 [2024-12-16 02:59:26.591724] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:56.802 [2024-12-16 02:59:26.667486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:56.802 [2024-12-16 02:59:26.689793] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:56.802 [2024-12-16 02:59:26.689829] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:56.802 [2024-12-16 02:59:26.689836] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:56.802 [2024-12-16 02:59:26.689843] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:56.802 [2024-12-16 02:59:26.689857] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:56.802 [2024-12-16 02:59:26.694867] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:56.802 [2024-12-16 02:59:26.694871] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:56.802 [2024-12-16 02:59:26.757801] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:37:56.802 [2024-12-16 02:59:26.757867] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:56.802 [2024-12-16 02:59:26.758018] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:56.802 [2024-12-16 02:59:26.835548] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.802 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:56.802 [2024-12-16 02:59:26.863947] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:56.803 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.803 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:37:56.803 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.803 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:56.803 NULL1 00:37:56.803 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.803 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:56.803 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.803 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:56.803 Delay0 00:37:56.803 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.803 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:56.803 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:56.803 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:56.803 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:56.803 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1229116 00:37:56.803 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:37:56.803 02:59:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:56.803 [2024-12-16 02:59:26.975173] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:37:58.705 02:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:58.705 02:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:58.705 02:59:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:58.705 Read completed with error (sct=0, sc=8) 00:37:58.705 Read completed with error (sct=0, sc=8) 00:37:58.705 starting I/O failed: -6 00:37:58.705 Read completed with error (sct=0, sc=8) 00:37:58.705 Read completed with error (sct=0, sc=8) 00:37:58.705 Write completed with error (sct=0, sc=8) 00:37:58.705 Read completed with error (sct=0, sc=8) 00:37:58.705 starting I/O failed: -6 00:37:58.705 Write completed with error (sct=0, sc=8) 00:37:58.705 Write completed with error (sct=0, sc=8) 00:37:58.705 Write completed with error (sct=0, sc=8) 00:37:58.705 Read completed with error (sct=0, sc=8) 00:37:58.705 starting I/O failed: -6 00:37:58.705 Read completed with error (sct=0, sc=8) 00:37:58.705 Write completed with error (sct=0, sc=8) 00:37:58.705 Read completed with error (sct=0, sc=8) 00:37:58.705 Read completed with error (sct=0, sc=8) 00:37:58.705 starting I/O failed: -6 00:37:58.705 Write completed with error (sct=0, sc=8) 00:37:58.705 Read completed with error (sct=0, sc=8) 00:37:58.705 Read completed with error (sct=0, sc=8) 00:37:58.705 Read completed with error (sct=0, sc=8) 00:37:58.705 starting I/O failed: -6 00:37:58.705 Read completed with error (sct=0, sc=8) 00:37:58.705 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 starting I/O failed: -6 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, 
sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 starting I/O failed: -6 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 starting I/O failed: -6 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 starting I/O failed: -6 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 starting I/O failed: -6 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 starting I/O failed: -6 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 [2024-12-16 02:59:29.065730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187e400 is same with the state(6) to be set 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with 
error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 starting I/O failed: -6 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 starting I/O failed: -6 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with 
error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 starting I/O failed: -6 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 starting I/O failed: -6 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 starting I/O failed: -6 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 starting I/O failed: -6 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 starting I/O failed: -6 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with 
error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 starting I/O failed: -6 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 starting I/O failed: -6 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 starting I/O failed: -6 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 starting I/O failed: -6 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 [2024-12-16 02:59:29.066307] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7a60000c80 is same with the state(6) to be set 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 
00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Read completed with error (sct=0, sc=8) 00:37:58.706 Write completed 
with error (sct=0, sc=8) 00:37:58.706 Write completed with error (sct=0, sc=8) 00:37:59.643 [2024-12-16 02:59:30.029650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187c190 is same with the state(6) to be set 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Write completed with error (sct=0, sc=8) 00:37:59.643 Write completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Write completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Write completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 [2024-12-16 02:59:30.069201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187df70 is same with the state(6) to be set 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Write completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Write completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Write completed with error 
(sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Write completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Write completed with error (sct=0, sc=8) 00:37:59.643 Write completed with error (sct=0, sc=8) 00:37:59.643 [2024-12-16 02:59:30.070229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7a6000d800 is same with the state(6) to be set 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Write completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Write completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Write completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Write completed with error (sct=0, sc=8) 00:37:59.643 Write completed with error (sct=0, sc=8) 00:37:59.643 Write completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Write completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 
00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Write completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 [2024-12-16 02:59:30.070436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187e5e0 is same with the state(6) to be set 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Write completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Write completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Write completed with error (sct=0, sc=8) 00:37:59.643 Write completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Write completed with error (sct=0, sc=8) 00:37:59.643 Write completed with error (sct=0, sc=8) 00:37:59.643 Write completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Write completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Write completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 Read completed with error (sct=0, sc=8) 00:37:59.643 [2024-12-16 02:59:30.071213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7a6000d060 is same with the state(6) to be set 00:37:59.643 Initializing NVMe Controllers 00:37:59.643 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:59.643 Controller IO queue size 128, less than required. 
00:37:59.643 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:59.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:37:59.643 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:37:59.643 Initialization complete. Launching workers.
00:37:59.643 ========================================================
00:37:59.643 Latency(us)
00:37:59.643 Device Information : IOPS MiB/s Average min max
00:37:59.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 167.25 0.08 901823.61 362.88 1042900.37
00:37:59.643 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 164.77 0.08 907331.79 248.65 1043010.24
00:37:59.643 ========================================================
00:37:59.643 Total : 332.02 0.16 904557.12 248.65 1043010.24
00:37:59.643 
00:37:59.643 [2024-12-16 02:59:30.071769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187c190 (9): Bad file descriptor
00:37:59.643 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:37:59.643 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:59.643 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:37:59.643 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1229116
00:37:59.643 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:38:00.212 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:38:00.212 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # 
kill -0 1229116 00:38:00.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1229116) - No such process 00:38:00.212 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1229116 00:38:00.212 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:38:00.212 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1229116 00:38:00.212 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:38:00.212 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:00.212 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:38:00.212 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:00.212 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1229116 00:38:00.212 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:38:00.212 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:00.212 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:00.212 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:00.212 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:00.212 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.212 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:00.212 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.212 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:00.212 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.212 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:00.212 [2024-12-16 02:59:30.603941] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:00.212 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.212 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:00.212 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:00.212 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:00.212 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:00.212 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1229576 00:38:00.212 02:59:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:38:00.212 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:00.212 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1229576 00:38:00.212 02:59:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:00.212 [2024-12-16 02:59:30.688871] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:38:00.471 02:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:00.471 02:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1229576 00:38:00.471 02:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:01.038 02:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:01.038 02:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1229576 00:38:01.038 02:59:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:01.606 02:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:01.606 02:59:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1229576 00:38:01.606 02:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:02.172 02:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:02.172 02:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1229576 00:38:02.172 02:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:02.740 02:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:02.740 02:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1229576 00:38:02.740 02:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:02.998 02:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:02.998 02:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1229576 00:38:02.998 02:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:03.257 Initializing NVMe Controllers 00:38:03.257 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:03.257 Controller IO queue size 128, less than required. 00:38:03.257 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:38:03.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:38:03.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:38:03.257 Initialization complete. Launching workers.
00:38:03.257 ========================================================
00:38:03.257 Latency(us)
00:38:03.257 Device Information : IOPS MiB/s Average min max
00:38:03.257 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002594.12 1000164.58 1041332.62
00:38:03.257 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005144.44 1000261.47 1042683.10
00:38:03.257 ========================================================
00:38:03.257 Total : 256.00 0.12 1003869.28 1000164.58 1042683.10
00:38:03.257 
00:38:03.515 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:38:03.515 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1229576
00:38:03.515 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1229576) - No such process
00:38:03.515 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1229576
00:38:03.515 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:38:03.515 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:38:03.515 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:38:03.515 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:38:03.515 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:03.515 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:38:03.515 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:03.515 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:03.515 rmmod nvme_tcp 00:38:03.773 rmmod nvme_fabrics 00:38:03.773 rmmod nvme_keyring 00:38:03.773 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:03.774 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:38:03.774 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:38:03.774 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1228945 ']' 00:38:03.774 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1228945 00:38:03.774 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1228945 ']' 00:38:03.774 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1228945 00:38:03.774 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:38:03.774 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:03.774 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1228945 00:38:03.774 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:38:03.774 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:03.774 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1228945' 00:38:03.774 killing process with pid 1228945 00:38:03.774 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1228945 00:38:03.774 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1228945 00:38:03.774 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:03.774 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:03.774 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:03.774 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:38:03.774 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:38:03.774 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:03.774 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:38:03.774 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:03.774 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:03.774 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:38:03.774 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:03.774 02:59:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:06.308 00:38:06.308 real 0m16.094s 00:38:06.308 user 0m25.998s 00:38:06.308 sys 0m6.059s 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:06.308 ************************************ 00:38:06.308 END TEST nvmf_delete_subsystem 00:38:06.308 ************************************ 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:06.308 ************************************ 00:38:06.308 START TEST nvmf_host_management 00:38:06.308 ************************************ 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:38:06.308 * Looking for test storage... 
00:38:06.308 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:38:06.308 02:59:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:06.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.308 --rc genhtml_branch_coverage=1 00:38:06.308 --rc genhtml_function_coverage=1 00:38:06.308 --rc genhtml_legend=1 00:38:06.308 --rc geninfo_all_blocks=1 00:38:06.308 --rc geninfo_unexecuted_blocks=1 00:38:06.308 00:38:06.308 ' 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:06.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.308 --rc genhtml_branch_coverage=1 00:38:06.308 --rc genhtml_function_coverage=1 00:38:06.308 --rc genhtml_legend=1 00:38:06.308 --rc geninfo_all_blocks=1 00:38:06.308 --rc geninfo_unexecuted_blocks=1 00:38:06.308 00:38:06.308 ' 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:06.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.308 --rc genhtml_branch_coverage=1 00:38:06.308 --rc genhtml_function_coverage=1 00:38:06.308 --rc genhtml_legend=1 00:38:06.308 --rc geninfo_all_blocks=1 00:38:06.308 --rc geninfo_unexecuted_blocks=1 00:38:06.308 00:38:06.308 ' 00:38:06.308 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:06.308 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:06.308 --rc genhtml_branch_coverage=1 00:38:06.309 --rc genhtml_function_coverage=1 00:38:06.309 --rc genhtml_legend=1 00:38:06.309 --rc geninfo_all_blocks=1 00:38:06.309 --rc geninfo_unexecuted_blocks=1 00:38:06.309 00:38:06.309 ' 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:06.309 02:59:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.309 
02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:38:06.309 02:59:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:38:12.883 
02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:12.883 02:59:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:12.883 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:12.883 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:12.883 02:59:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:12.884 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:12.884 02:59:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:12.884 Found net devices under 0000:af:00.0: cvl_0_0 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:12.884 Found net devices under 0000:af:00.1: cvl_0_1 00:38:12.884 02:59:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:12.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:12.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.403 ms 00:38:12.884 00:38:12.884 --- 10.0.0.2 ping statistics --- 00:38:12.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:12.884 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:12.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:12.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:38:12.884 00:38:12.884 --- 10.0.0.1 ping statistics --- 00:38:12.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:12.884 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
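The trace above (nvmf/common.sh, `nvmf_tcp_init`) builds the isolated TCP test topology: the target-side NIC `cvl_0_0` is moved into a fresh network namespace, both ends get 10.0.0.x/24 addresses, an iptables ACCEPT rule opens NVMe/TCP port 4420 on the initiator side, and connectivity is verified with ping in both directions. A minimal dry-run sketch of that sequence, with interface names, namespace, and IPs taken from the log; the `run` helper is ours and only prints each command so the sequence can be inspected without root:

```shell
#!/bin/sh
# Dry-run sketch of the netns setup performed by nvmf/common.sh (nvmf_tcp_init).
# Names and addresses mirror the log above; nothing is actually configured.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0          # target-side interface, moved into the namespace
INI_IF=cvl_0_1          # initiator-side interface, stays in the root namespace
TGT_IP=10.0.0.2
INI_IP=10.0.0.1

run() { echo "+ $*"; }  # replace the echo with "$@" (as root) to apply for real

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add "$INI_IP/24" dev "$INI_IF"
run ip netns exec "$NS" ip addr add "$TGT_IP/24" dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TGT_IP"                         # root ns -> target ns
run ip netns exec "$NS" ping -c 1 "$INI_IP"     # target ns -> root ns
```

The namespace split is what lets a single host act as both NVMe/TCP target and initiator over real NIC queues, which the rest of this test depends on.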
00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1233682 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1233682 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1233682 ']' 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:12.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:12.884 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:12.884 [2024-12-16 02:59:42.710768] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:12.884 [2024-12-16 02:59:42.711743] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:38:12.884 [2024-12-16 02:59:42.711782] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:12.884 [2024-12-16 02:59:42.791251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:12.884 [2024-12-16 02:59:42.815057] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:12.884 [2024-12-16 02:59:42.815094] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:12.884 [2024-12-16 02:59:42.815101] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:12.884 [2024-12-16 02:59:42.815106] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:12.885 [2024-12-16 02:59:42.815111] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:12.885 [2024-12-16 02:59:42.816451] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:38:12.885 [2024-12-16 02:59:42.816542] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:38:12.885 [2024-12-16 02:59:42.816626] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:12.885 [2024-12-16 02:59:42.816627] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:38:12.885 [2024-12-16 02:59:42.880052] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:12.885 [2024-12-16 02:59:42.881282] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:38:12.885 [2024-12-16 02:59:42.881423] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:12.885 [2024-12-16 02:59:42.881786] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:12.885 [2024-12-16 02:59:42.881810] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:38:12.885 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:12.885 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:38:12.885 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:12.885 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:12.885 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:12.885 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:12.885 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:12.885 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.885 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:12.885 [2024-12-16 02:59:42.945441] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:12.885 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.885 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:38:12.885 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:12.885 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:12.885 02:59:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:38:12.885 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:38:12.885 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:38:12.885 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:12.885 02:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:12.885 Malloc0 00:38:12.885 [2024-12-16 02:59:43.033713] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:12.885 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:12.885 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:38:12.885 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:12.885 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:12.885 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1233745 00:38:12.885 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1233745 /var/tmp/bdevperf.sock 00:38:12.885 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1233745 ']' 00:38:12.885 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:38:12.885 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:38:12.885 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:38:12.885 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:12.885 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:12.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:12.885 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:38:12.885 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:12.885 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:38:12.885 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:12.885 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:12.885 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:12.885 { 00:38:12.885 "params": { 00:38:12.885 "name": "Nvme$subsystem", 00:38:12.885 "trtype": "$TEST_TRANSPORT", 00:38:12.885 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:12.885 "adrfam": "ipv4", 00:38:12.885 "trsvcid": "$NVMF_PORT", 00:38:12.885 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:38:12.885 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:12.885 "hdgst": ${hdgst:-false}, 00:38:12.885 "ddgst": ${ddgst:-false} 00:38:12.885 }, 00:38:12.885 "method": "bdev_nvme_attach_controller" 00:38:12.885 } 00:38:12.885 EOF 00:38:12.885 )") 00:38:12.885 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:38:12.885 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:38:12.885 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:38:12.885 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:12.885 "params": { 00:38:12.885 "name": "Nvme0", 00:38:12.885 "trtype": "tcp", 00:38:12.885 "traddr": "10.0.0.2", 00:38:12.885 "adrfam": "ipv4", 00:38:12.885 "trsvcid": "4420", 00:38:12.885 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:12.885 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:12.885 "hdgst": false, 00:38:12.885 "ddgst": false 00:38:12.885 }, 00:38:12.885 "method": "bdev_nvme_attach_controller" 00:38:12.885 }' 00:38:12.885 [2024-12-16 02:59:43.131283] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:38:12.885 [2024-12-16 02:59:43.131333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1233745 ] 00:38:12.885 [2024-12-16 02:59:43.208081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:12.885 [2024-12-16 02:59:43.230401] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:12.885 Running I/O for 10 seconds... 
00:38:13.145 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:13.145 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:38:13.145 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:38:13.145 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.145 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:13.145 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:13.145 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:13.145 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:38:13.145 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:38:13.145 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:38:13.145 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:38:13.145 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:38:13.145 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:38:13.145 02:59:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:38:13.145 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:38:13.145 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:38:13.145 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.145 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:13.145 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:13.145 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=103 00:38:13.145 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 103 -ge 100 ']' 00:38:13.145 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:38:13.145 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:38:13.145 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:38:13.145 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:38:13.145 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.145 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:13.145 
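The `waitforio` trace above implements a bounded poll: starting from `i = 10`, it queries `bdev_get_iostat` for `Nvme0n1`, extracts `num_read_ops` with jq, and stops as soon as the count reaches 100 (here 103 on the first try, so `ret=0` and `break`). A generic sketch of that poll-until-threshold pattern; the function name is ours and the check command is passed in rather than hard-coded to the SPDK RPC:

```shell
# Poll-until helper modeled on waitforio in target/host_management.sh:
# retry a numeric check up to $tries times, succeeding once it meets $threshold.
wait_for_count() {      # hypothetical generalization of the log's loop
  tries=$1; threshold=$2; shift 2
  i=$tries
  while [ "$i" -ne 0 ]; do
    count=$("$@")       # e.g. rpc.py bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops'
    if [ "$count" -ge "$threshold" ]; then
      return 0          # enough I/O observed; bdevperf is making progress
    fi
    i=$((i - 1))
    sleep 1
  done
  return 1              # retries exhausted without reaching the threshold
}
```

Only after this gate passes does the test proceed to the destructive step traced next (`nvmf_subsystem_remove_host`), so the subsequent qpair aborts are exercised against a connection known to be actively doing I/O.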
[2024-12-16 02:59:43.649319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.145 [2024-12-16 02:59:43.649354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.145 [2024-12-16 02:59:43.649369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.145 [2024-12-16 02:59:43.649376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.145 [2024-12-16 02:59:43.649384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.145 [2024-12-16 02:59:43.649391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.145 [2024-12-16 02:59:43.649399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.145 [2024-12-16 02:59:43.649406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.145 [2024-12-16 02:59:43.649414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.145 [2024-12-16 02:59:43.649420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.145 [2024-12-16 02:59:43.649429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.145 [2024-12-16 02:59:43.649439] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.145 [2024-12-16 02:59:43.649447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.145 [2024-12-16 02:59:43.649454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.145 [2024-12-16 02:59:43.649462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.145 [2024-12-16 02:59:43.649469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.145 [2024-12-16 02:59:43.649477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.145 [2024-12-16 02:59:43.649483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.145 [2024-12-16 02:59:43.649491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.145 [2024-12-16 02:59:43.649498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.145 [2024-12-16 02:59:43.649506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 
02:59:43.649689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649768] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.649987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.649997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.650004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.650012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.650019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 
02:59:43.650026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.650033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.650041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.650047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.650055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.650062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.650069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.650076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.650084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.650090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.146 [2024-12-16 02:59:43.650098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.146 [2024-12-16 02:59:43.650106] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.147 [2024-12-16 02:59:43.650116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.147 [2024-12-16 02:59:43.650122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.147 [2024-12-16 02:59:43.650130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.147 [2024-12-16 02:59:43.650136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.147 [2024-12-16 02:59:43.650144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.147 [2024-12-16 02:59:43.650150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.147 [2024-12-16 02:59:43.650159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.147 [2024-12-16 02:59:43.650165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.147 [2024-12-16 02:59:43.650173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.147 [2024-12-16 02:59:43.650180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.147 [2024-12-16 02:59:43.650188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 
nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.147 [2024-12-16 02:59:43.650194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.147 [2024-12-16 02:59:43.650202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.147 [2024-12-16 02:59:43.650209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.147 [2024-12-16 02:59:43.650217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.147 [2024-12-16 02:59:43.650224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.147 [2024-12-16 02:59:43.650231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.147 [2024-12-16 02:59:43.650237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.147 [2024-12-16 02:59:43.650245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.147 [2024-12-16 02:59:43.650251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.147 [2024-12-16 02:59:43.650259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.147 [2024-12-16 02:59:43.650266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:38:13.147 [2024-12-16 02:59:43.650274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.147 [2024-12-16 02:59:43.650280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.147 [2024-12-16 02:59:43.650288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:13.147 [2024-12-16 02:59:43.650295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.147 [2024-12-16 02:59:43.651245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:38:13.147 task offset: 27392 on job bdev=Nvme0n1 fails 00:38:13.147 00:38:13.147 Latency(us) 00:38:13.147 [2024-12-16T01:59:43.806Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:13.147 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:38:13.147 Job: Nvme0n1 ended in about 0.11 seconds with error 00:38:13.147 Verification LBA range: start 0x0 length 0x400 00:38:13.147 Nvme0n1 : 0.11 1730.23 108.14 576.74 0.00 25618.18 1466.76 27088.21 00:38:13.147 [2024-12-16T01:59:43.806Z] =================================================================================================================== 00:38:13.147 [2024-12-16T01:59:43.806Z] Total : 1730.23 108.14 576.74 0.00 25618.18 1466.76 27088.21 00:38:13.147 [2024-12-16 02:59:43.653587] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:13.147 [2024-12-16 02:59:43.653609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a4490 (9): Bad file descriptor 00:38:13.147 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:38:13.147 [2024-12-16 02:59:43.654451] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:38:13.147 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:38:13.147 [2024-12-16 02:59:43.654566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:38:13.147 [2024-12-16 02:59:43.654589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:13.147 [2024-12-16 02:59:43.654605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:38:13.147 [2024-12-16 02:59:43.654613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:38:13.147 [2024-12-16 02:59:43.654620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:13.147 [2024-12-16 02:59:43.654626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x13a4490 00:38:13.147 [2024-12-16 02:59:43.654644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13a4490 (9): Bad file descriptor 00:38:13.147 [2024-12-16 02:59:43.654655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:38:13.147 [2024-12-16 02:59:43.654661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:38:13.147 [2024-12-16 02:59:43.654669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed 
state. 00:38:13.147 [2024-12-16 02:59:43.654678] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:38:13.147 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:13.147 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:13.147 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:13.147 02:59:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:38:14.083 02:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1233745 00:38:14.083 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1233745) - No such process 00:38:14.083 02:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:38:14.083 02:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:38:14.083 02:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:38:14.083 02:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:38:14.083 02:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:38:14.083 02:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:38:14.083 
02:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:14.083 02:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:14.083 { 00:38:14.083 "params": { 00:38:14.083 "name": "Nvme$subsystem", 00:38:14.083 "trtype": "$TEST_TRANSPORT", 00:38:14.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:14.083 "adrfam": "ipv4", 00:38:14.083 "trsvcid": "$NVMF_PORT", 00:38:14.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:14.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:14.083 "hdgst": ${hdgst:-false}, 00:38:14.083 "ddgst": ${ddgst:-false} 00:38:14.083 }, 00:38:14.083 "method": "bdev_nvme_attach_controller" 00:38:14.083 } 00:38:14.083 EOF 00:38:14.083 )") 00:38:14.083 02:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:38:14.083 02:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:38:14.083 02:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:38:14.083 02:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:14.083 "params": { 00:38:14.083 "name": "Nvme0", 00:38:14.083 "trtype": "tcp", 00:38:14.083 "traddr": "10.0.0.2", 00:38:14.083 "adrfam": "ipv4", 00:38:14.083 "trsvcid": "4420", 00:38:14.083 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:14.083 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:14.083 "hdgst": false, 00:38:14.083 "ddgst": false 00:38:14.083 }, 00:38:14.083 "method": "bdev_nvme_attach_controller" 00:38:14.083 }' 00:38:14.083 [2024-12-16 02:59:44.722247] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:38:14.083 [2024-12-16 02:59:44.722296] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1233982 ] 00:38:14.341 [2024-12-16 02:59:44.797724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:14.341 [2024-12-16 02:59:44.818565] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:14.341 Running I/O for 1 seconds... 00:38:15.722 1984.00 IOPS, 124.00 MiB/s 00:38:15.722 Latency(us) 00:38:15.722 [2024-12-16T01:59:46.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:15.722 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:38:15.722 Verification LBA range: start 0x0 length 0x400 00:38:15.722 Nvme0n1 : 1.01 2029.65 126.85 0.00 0.00 31042.51 4244.24 27088.21 00:38:15.722 [2024-12-16T01:59:46.381Z] =================================================================================================================== 00:38:15.722 [2024-12-16T01:59:46.381Z] Total : 2029.65 126.85 0.00 0.00 31042.51 4244.24 27088.21 00:38:15.722 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:38:15.722 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:38:15.722 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:38:15.722 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:38:15.722 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:38:15.722 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:15.722 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:38:15.722 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:15.722 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:38:15.722 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:15.722 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:15.722 rmmod nvme_tcp 00:38:15.722 rmmod nvme_fabrics 00:38:15.722 rmmod nvme_keyring 00:38:15.722 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:15.722 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:38:15.722 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:38:15.722 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1233682 ']' 00:38:15.722 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1233682 00:38:15.722 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1233682 ']' 00:38:15.722 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1233682 00:38:15.722 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:38:15.722 02:59:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:15.722 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1233682 00:38:15.722 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:15.722 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:15.722 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1233682' 00:38:15.723 killing process with pid 1233682 00:38:15.723 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1233682 00:38:15.723 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1233682 00:38:15.982 [2024-12-16 02:59:46.409085] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:38:15.982 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:15.982 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:15.982 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:15.982 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:38:15.982 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:38:15.982 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:15.982 02:59:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:38:15.982 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:15.982 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:15.982 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:15.982 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:15.982 02:59:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:17.887 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:17.887 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:38:17.887 00:38:17.887 real 0m11.939s 00:38:17.887 user 0m16.342s 00:38:17.887 sys 0m6.087s 00:38:17.887 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:17.887 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:17.887 ************************************ 00:38:17.887 END TEST nvmf_host_management 00:38:17.887 ************************************ 00:38:17.887 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:38:17.887 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:17.887 
02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:17.887 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:18.146 ************************************ 00:38:18.146 START TEST nvmf_lvol 00:38:18.146 ************************************ 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:38:18.146 * Looking for test storage... 00:38:18.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:38:18.146 02:59:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:18.146 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:18.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.147 --rc genhtml_branch_coverage=1 00:38:18.147 --rc 
genhtml_function_coverage=1 00:38:18.147 --rc genhtml_legend=1 00:38:18.147 --rc geninfo_all_blocks=1 00:38:18.147 --rc geninfo_unexecuted_blocks=1 00:38:18.147 00:38:18.147 ' 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:18.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.147 --rc genhtml_branch_coverage=1 00:38:18.147 --rc genhtml_function_coverage=1 00:38:18.147 --rc genhtml_legend=1 00:38:18.147 --rc geninfo_all_blocks=1 00:38:18.147 --rc geninfo_unexecuted_blocks=1 00:38:18.147 00:38:18.147 ' 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:18.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.147 --rc genhtml_branch_coverage=1 00:38:18.147 --rc genhtml_function_coverage=1 00:38:18.147 --rc genhtml_legend=1 00:38:18.147 --rc geninfo_all_blocks=1 00:38:18.147 --rc geninfo_unexecuted_blocks=1 00:38:18.147 00:38:18.147 ' 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:18.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:18.147 --rc genhtml_branch_coverage=1 00:38:18.147 --rc genhtml_function_coverage=1 00:38:18.147 --rc genhtml_legend=1 00:38:18.147 --rc geninfo_all_blocks=1 00:38:18.147 --rc geninfo_unexecuted_blocks=1 00:38:18.147 00:38:18.147 ' 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.147 02:59:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:18.147 02:59:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:38:18.147 02:59:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:24.715 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:24.715 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:24.715 02:59:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:24.715 Found net devices under 0000:af:00.0: cvl_0_0 00:38:24.715 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:24.716 Found net devices under 0000:af:00.1: cvl_0_1 00:38:24.716 02:59:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:24.716 02:59:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:24.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:24.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.349 ms 00:38:24.716 00:38:24.716 --- 10.0.0.2 ping statistics --- 00:38:24.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:24.716 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:24.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:24.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:38:24.716 00:38:24.716 --- 10.0.0.1 ping statistics --- 00:38:24.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:24.716 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:38:24.716 
02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1237671 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1237671 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1237671 ']' 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:24.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:24.716 [2024-12-16 02:59:54.703393] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:38:24.716 [2024-12-16 02:59:54.704342] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:38:24.716 [2024-12-16 02:59:54.704380] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:24.716 [2024-12-16 02:59:54.783468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:24.716 [2024-12-16 02:59:54.805953] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:24.716 [2024-12-16 02:59:54.805993] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:24.716 [2024-12-16 02:59:54.806000] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:24.716 [2024-12-16 02:59:54.806006] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:24.716 [2024-12-16 02:59:54.806011] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:24.716 [2024-12-16 02:59:54.807244] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:24.716 [2024-12-16 02:59:54.807353] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:24.716 [2024-12-16 02:59:54.807354] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:38:24.716 [2024-12-16 02:59:54.869725] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:24.716 [2024-12-16 02:59:54.870528] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:24.716 [2024-12-16 02:59:54.870937] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:24.716 [2024-12-16 02:59:54.871035] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:24.716 02:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:24.716 [2024-12-16 02:59:55.108062] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:24.716 02:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:25.017 02:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:38:25.017 02:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:25.017 02:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:38:25.017 02:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:38:25.314 02:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:38:25.572 02:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=d3138d83-fc08-4765-baf8-a95796df67e2 00:38:25.572 02:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d3138d83-fc08-4765-baf8-a95796df67e2 lvol 20 00:38:25.572 02:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=2211b6e1-11c5-45c6-a456-dc03cb6f36c2 00:38:25.572 02:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:25.831 02:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2211b6e1-11c5-45c6-a456-dc03cb6f36c2 00:38:26.089 02:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:26.089 [2024-12-16 02:59:56.727949] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:26.348 02:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:26.348 
02:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1238056 00:38:26.348 02:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:38:26.348 02:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:38:27.724 02:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 2211b6e1-11c5-45c6-a456-dc03cb6f36c2 MY_SNAPSHOT 00:38:27.724 02:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=44fb9624-2156-471b-8872-1e60bf2cd106 00:38:27.724 02:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 2211b6e1-11c5-45c6-a456-dc03cb6f36c2 30 00:38:27.983 02:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 44fb9624-2156-471b-8872-1e60bf2cd106 MY_CLONE 00:38:28.242 02:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=01739937-2afa-456b-8ef5-ae5733bacd9d 00:38:28.242 02:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 01739937-2afa-456b-8ef5-ae5733bacd9d 00:38:28.501 02:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1238056 00:38:38.478 Initializing NVMe Controllers 00:38:38.478 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:38:38.478 
Controller IO queue size 128, less than required. 00:38:38.478 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:38.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:38:38.478 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:38:38.478 Initialization complete. Launching workers. 00:38:38.478 ======================================================== 00:38:38.478 Latency(us) 00:38:38.478 Device Information : IOPS MiB/s Average min max 00:38:38.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12732.40 49.74 10055.95 2001.56 64504.06 00:38:38.478 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12512.40 48.88 10231.64 3384.34 64726.54 00:38:38.478 ======================================================== 00:38:38.478 Total : 25244.80 98.61 10143.03 2001.56 64726.54 00:38:38.478 00:38:38.478 03:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:38.478 03:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 2211b6e1-11c5-45c6-a456-dc03cb6f36c2 00:38:38.478 03:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d3138d83-fc08-4765-baf8-a95796df67e2 00:38:38.478 03:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:38:38.478 03:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:38:38.478 03:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:38:38.478 03:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:38.478 03:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:38:38.478 03:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:38.479 03:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:38:38.479 03:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:38.479 03:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:38.479 rmmod nvme_tcp 00:38:38.479 rmmod nvme_fabrics 00:38:38.479 rmmod nvme_keyring 00:38:38.479 03:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:38.479 03:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:38:38.479 03:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:38:38.479 03:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1237671 ']' 00:38:38.479 03:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1237671 00:38:38.479 03:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1237671 ']' 00:38:38.479 03:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1237671 00:38:38.479 03:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:38:38.479 03:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:38.479 03:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 1237671 00:38:38.479 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:38.479 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:38.479 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1237671' 00:38:38.479 killing process with pid 1237671 00:38:38.479 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1237671 00:38:38.479 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1237671 00:38:38.479 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:38.479 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:38.479 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:38.479 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:38:38.479 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:38:38.479 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:38.479 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:38:38.479 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:38.479 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:38.479 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:38.479 03:00:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:38.479 03:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:39.858 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:39.858 00:38:39.858 real 0m21.724s 00:38:39.858 user 0m55.581s 00:38:39.858 sys 0m9.668s 00:38:39.858 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:39.858 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:39.858 ************************************ 00:38:39.858 END TEST nvmf_lvol 00:38:39.858 ************************************ 00:38:39.858 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:39.858 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:39.858 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:39.858 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:39.858 ************************************ 00:38:39.858 START TEST nvmf_lvs_grow 00:38:39.858 ************************************ 00:38:39.858 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:39.858 * Looking for test storage... 
00:38:39.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:39.858 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:39.858 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:38:39.858 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:40.118 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:40.118 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:40.118 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:40.118 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:40.118 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:38:40.118 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:38:40.118 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:38:40.118 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:38:40.118 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:38:40.118 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:38:40.118 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:38:40.118 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:40.118 03:00:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:40.119 03:00:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:40.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:40.119 --rc genhtml_branch_coverage=1 00:38:40.119 --rc genhtml_function_coverage=1 00:38:40.119 --rc genhtml_legend=1 00:38:40.119 --rc geninfo_all_blocks=1 00:38:40.119 --rc geninfo_unexecuted_blocks=1 00:38:40.119 00:38:40.119 ' 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:40.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:40.119 --rc genhtml_branch_coverage=1 00:38:40.119 --rc genhtml_function_coverage=1 00:38:40.119 --rc genhtml_legend=1 00:38:40.119 --rc geninfo_all_blocks=1 00:38:40.119 --rc geninfo_unexecuted_blocks=1 00:38:40.119 00:38:40.119 ' 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:40.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:40.119 --rc genhtml_branch_coverage=1 00:38:40.119 --rc genhtml_function_coverage=1 00:38:40.119 --rc genhtml_legend=1 00:38:40.119 --rc geninfo_all_blocks=1 00:38:40.119 --rc geninfo_unexecuted_blocks=1 00:38:40.119 00:38:40.119 ' 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:40.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:40.119 --rc genhtml_branch_coverage=1 00:38:40.119 --rc genhtml_function_coverage=1 00:38:40.119 --rc genhtml_legend=1 00:38:40.119 --rc geninfo_all_blocks=1 00:38:40.119 --rc 
geninfo_unexecuted_blocks=1 00:38:40.119 00:38:40.119 ' 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:40.119 03:00:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.119 03:00:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:40.119 03:00:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:40.119 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:40.120 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:40.120 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:40.120 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:40.120 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:40.120 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:40.120 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:38:40.120 03:00:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:46.690 
03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:46.690 03:00:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:46.690 03:00:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:46.690 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:46.690 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:46.690 Found net devices under 0000:af:00.0: cvl_0_0 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:46.690 03:00:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:46.690 Found net devices under 0000:af:00.1: cvl_0_1 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:46.690 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:46.691 
03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:46.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:46.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:38:46.691 00:38:46.691 --- 10.0.0.2 ping statistics --- 00:38:46.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:46.691 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:46.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:46.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:38:46.691 00:38:46.691 --- 10.0.0.1 ping statistics --- 00:38:46.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:46.691 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:46.691 03:00:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1243694 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1243694 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1243694 ']' 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:46.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:46.691 [2024-12-16 03:00:16.500373] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:46.691 [2024-12-16 03:00:16.501286] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:38:46.691 [2024-12-16 03:00:16.501317] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:46.691 [2024-12-16 03:00:16.580246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:46.691 [2024-12-16 03:00:16.601799] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:46.691 [2024-12-16 03:00:16.601834] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:46.691 [2024-12-16 03:00:16.601841] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:46.691 [2024-12-16 03:00:16.601852] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:46.691 [2024-12-16 03:00:16.601857] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:46.691 [2024-12-16 03:00:16.602326] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:46.691 [2024-12-16 03:00:16.665772] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:46.691 [2024-12-16 03:00:16.665973] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
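For readers following the harness, the nvmf_tcp_init steps traced above (nvmf/common.sh) amount to moving the target-side port into its own network namespace, addressing both ends, opening the NVMe/TCP port, and ping-testing the link. A minimal standalone sketch of that sequence — interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addressing are taken from this log; other rigs will have different device names. It defaults to a dry run (printing each command) since the real thing needs root and the actual NICs:

```shell
#!/usr/bin/env bash
set -euo pipefail
# Sketch of the nvmf_tcp_init sequence from this log. DRY_RUN=1 (the
# default) only prints the commands; run with DRY_RUN=0 as root on a
# machine that actually has the cvl_0_0 / cvl_0_1 ports.
NS=cvl_0_0_ns_spdk

run() { echo "+ $*"; if [ "${DRY_RUN:-1}" = 0 ]; then "$@"; fi; }

# start from clean interfaces, then isolate the target port in a netns
run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"

# initiator side (root ns) gets 10.0.0.1; target side (inside ns) 10.0.0.2
run ip addr add 10.0.0.1/24 dev cvl_0_1
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up

# open the NVMe/TCP port, then verify reachability in both directions
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

Launching nvmf_tgt under `ip netns exec cvl_0_0_ns_spdk`, as the log does next, is what makes the target listen on the namespaced 10.0.0.2 side.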
00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:46.691 [2024-12-16 03:00:16.894993] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:46.691 ************************************ 00:38:46.691 START TEST lvs_grow_clean 00:38:46.691 ************************************ 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:38:46.691 03:00:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:46.691 03:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:46.691 03:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:46.692 03:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:46.951 03:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=62b1d25e-5619-4cde-b0ba-0c11f6523413 00:38:46.951 03:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62b1d25e-5619-4cde-b0ba-0c11f6523413 00:38:46.951 03:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:46.951 03:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:46.951 03:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:46.951 03:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 62b1d25e-5619-4cde-b0ba-0c11f6523413 lvol 150 00:38:47.210 03:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7f2ea035-2dcc-4976-ba88-160e99304bb5 00:38:47.210 03:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:47.210 03:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:47.469 [2024-12-16 03:00:17.950699] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:47.469 [2024-12-16 03:00:17.950820] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:47.469 true 00:38:47.469 03:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62b1d25e-5619-4cde-b0ba-0c11f6523413 00:38:47.469 03:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:47.727 03:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:47.727 03:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:47.727 03:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7f2ea035-2dcc-4976-ba88-160e99304bb5 00:38:47.986 03:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:48.244 [2024-12-16 03:00:18.731273] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:48.244 03:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:48.503 03:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1244181 00:38:48.503 03:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:48.503 03:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:48.503 03:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1244181 /var/tmp/bdevperf.sock 00:38:48.503 03:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1244181 ']' 00:38:48.503 03:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:48.503 03:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:48.503 03:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:48.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
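The RPC calls traced above boil down to: create the TCP transport, publish the lvol bdev as a namespace of nqn.2016-06.io.spdk:cnode0, add a listener on 10.0.0.2:4420, and then attach a controller through bdevperf's separate RPC socket so the namespace appears as Nvme0n1 for the randwrite run. A sketch of that sequence with the paths, NQN, serial, and lvol UUID copied from this log (dry run by default, since it needs a live target):

```shell
#!/usr/bin/env bash
set -euo pipefail
# Sketch of the rpc.py wiring from this log; values are copied verbatim
# from the trace above. DRY_RUN=1 (default) only prints the commands.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode0
LVOL=7f2ea035-2dcc-4976-ba88-160e99304bb5

run() { echo "+ $*"; if [ "${DRY_RUN:-1}" = 0 ]; then "$@"; fi; }

run "$RPC" nvmf_create_transport -t tcp -o -u 8192
run "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK0
run "$RPC" nvmf_subsystem_add_ns "$NQN" "$LVOL"
run "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
run "$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# bdevperf runs as its own process with its own RPC socket; attaching a
# controller there is what surfaces the namespace as Nvme0n1
run "$RPC" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
```

The subsequent `bdev_get_bdevs -b Nvme0n1` dump in the log confirms the attach: the bdev's UUID matches the lvol and the trid points at the 10.0.0.2:4420 listener.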
00:38:48.503 03:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:48.503 03:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:48.503 [2024-12-16 03:00:19.004614] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:38:48.503 [2024-12-16 03:00:19.004660] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1244181 ] 00:38:48.503 [2024-12-16 03:00:19.078928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:48.503 [2024-12-16 03:00:19.101363] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:48.762 03:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:48.762 03:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:38:48.762 03:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:49.021 Nvme0n1 00:38:49.021 03:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:49.280 [ 00:38:49.280 { 00:38:49.280 "name": "Nvme0n1", 00:38:49.280 "aliases": [ 00:38:49.280 "7f2ea035-2dcc-4976-ba88-160e99304bb5" 00:38:49.280 ], 00:38:49.280 "product_name": "NVMe disk", 00:38:49.280 
"block_size": 4096, 00:38:49.280 "num_blocks": 38912, 00:38:49.280 "uuid": "7f2ea035-2dcc-4976-ba88-160e99304bb5", 00:38:49.280 "numa_id": 1, 00:38:49.280 "assigned_rate_limits": { 00:38:49.280 "rw_ios_per_sec": 0, 00:38:49.280 "rw_mbytes_per_sec": 0, 00:38:49.280 "r_mbytes_per_sec": 0, 00:38:49.280 "w_mbytes_per_sec": 0 00:38:49.280 }, 00:38:49.280 "claimed": false, 00:38:49.280 "zoned": false, 00:38:49.280 "supported_io_types": { 00:38:49.280 "read": true, 00:38:49.280 "write": true, 00:38:49.280 "unmap": true, 00:38:49.280 "flush": true, 00:38:49.280 "reset": true, 00:38:49.280 "nvme_admin": true, 00:38:49.280 "nvme_io": true, 00:38:49.280 "nvme_io_md": false, 00:38:49.280 "write_zeroes": true, 00:38:49.280 "zcopy": false, 00:38:49.280 "get_zone_info": false, 00:38:49.280 "zone_management": false, 00:38:49.280 "zone_append": false, 00:38:49.280 "compare": true, 00:38:49.280 "compare_and_write": true, 00:38:49.280 "abort": true, 00:38:49.280 "seek_hole": false, 00:38:49.280 "seek_data": false, 00:38:49.280 "copy": true, 00:38:49.280 "nvme_iov_md": false 00:38:49.280 }, 00:38:49.280 "memory_domains": [ 00:38:49.280 { 00:38:49.280 "dma_device_id": "system", 00:38:49.280 "dma_device_type": 1 00:38:49.280 } 00:38:49.280 ], 00:38:49.280 "driver_specific": { 00:38:49.280 "nvme": [ 00:38:49.280 { 00:38:49.280 "trid": { 00:38:49.280 "trtype": "TCP", 00:38:49.280 "adrfam": "IPv4", 00:38:49.280 "traddr": "10.0.0.2", 00:38:49.280 "trsvcid": "4420", 00:38:49.280 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:49.280 }, 00:38:49.280 "ctrlr_data": { 00:38:49.280 "cntlid": 1, 00:38:49.280 "vendor_id": "0x8086", 00:38:49.280 "model_number": "SPDK bdev Controller", 00:38:49.280 "serial_number": "SPDK0", 00:38:49.280 "firmware_revision": "25.01", 00:38:49.280 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:49.280 "oacs": { 00:38:49.280 "security": 0, 00:38:49.280 "format": 0, 00:38:49.280 "firmware": 0, 00:38:49.280 "ns_manage": 0 00:38:49.280 }, 00:38:49.280 "multi_ctrlr": true, 
00:38:49.280 "ana_reporting": false 00:38:49.280 }, 00:38:49.280 "vs": { 00:38:49.280 "nvme_version": "1.3" 00:38:49.280 }, 00:38:49.280 "ns_data": { 00:38:49.281 "id": 1, 00:38:49.281 "can_share": true 00:38:49.281 } 00:38:49.281 } 00:38:49.281 ], 00:38:49.281 "mp_policy": "active_passive" 00:38:49.281 } 00:38:49.281 } 00:38:49.281 ] 00:38:49.281 03:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1244201 00:38:49.281 03:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:49.281 03:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:49.281 Running I/O for 10 seconds... 00:38:50.218 Latency(us) 00:38:50.218 [2024-12-16T02:00:20.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:50.218 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:50.218 Nvme0n1 : 1.00 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:38:50.218 [2024-12-16T02:00:20.877Z] =================================================================================================================== 00:38:50.218 [2024-12-16T02:00:20.877Z] Total : 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:38:50.218 00:38:51.156 03:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 62b1d25e-5619-4cde-b0ba-0c11f6523413 00:38:51.415 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:51.415 Nvme0n1 : 2.00 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:38:51.415 [2024-12-16T02:00:22.074Z] 
=================================================================================================================== 00:38:51.415 [2024-12-16T02:00:22.074Z] Total : 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:38:51.415 00:38:51.415 true 00:38:51.415 03:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62b1d25e-5619-4cde-b0ba-0c11f6523413 00:38:51.415 03:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:51.674 03:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:51.674 03:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:51.674 03:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1244201 00:38:52.243 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:52.243 Nvme0n1 : 3.00 23156.33 90.45 0.00 0.00 0.00 0.00 0.00 00:38:52.243 [2024-12-16T02:00:22.902Z] =================================================================================================================== 00:38:52.243 [2024-12-16T02:00:22.902Z] Total : 23156.33 90.45 0.00 0.00 0.00 0.00 0.00 00:38:52.243 00:38:53.621 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:53.621 Nvme0n1 : 4.00 23304.50 91.03 0.00 0.00 0.00 0.00 0.00 00:38:53.621 [2024-12-16T02:00:24.280Z] =================================================================================================================== 00:38:53.621 [2024-12-16T02:00:24.280Z] Total : 23304.50 91.03 0.00 0.00 0.00 0.00 0.00 00:38:53.621 00:38:54.558 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:38:54.558 Nvme0n1 : 5.00 23393.40 91.38 0.00 0.00 0.00 0.00 0.00 00:38:54.558 [2024-12-16T02:00:25.217Z] =================================================================================================================== 00:38:54.558 [2024-12-16T02:00:25.217Z] Total : 23393.40 91.38 0.00 0.00 0.00 0.00 0.00 00:38:54.558 00:38:55.493 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:55.493 Nvme0n1 : 6.00 23452.67 91.61 0.00 0.00 0.00 0.00 0.00 00:38:55.493 [2024-12-16T02:00:26.152Z] =================================================================================================================== 00:38:55.493 [2024-12-16T02:00:26.152Z] Total : 23452.67 91.61 0.00 0.00 0.00 0.00 0.00 00:38:55.493 00:38:56.429 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:56.429 Nvme0n1 : 7.00 23513.14 91.85 0.00 0.00 0.00 0.00 0.00 00:38:56.429 [2024-12-16T02:00:27.088Z] =================================================================================================================== 00:38:56.429 [2024-12-16T02:00:27.088Z] Total : 23513.14 91.85 0.00 0.00 0.00 0.00 0.00 00:38:56.429 00:38:57.366 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:57.366 Nvme0n1 : 8.00 23542.62 91.96 0.00 0.00 0.00 0.00 0.00 00:38:57.366 [2024-12-16T02:00:28.025Z] =================================================================================================================== 00:38:57.366 [2024-12-16T02:00:28.025Z] Total : 23542.62 91.96 0.00 0.00 0.00 0.00 0.00 00:38:57.366 00:38:58.303 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:58.303 Nvme0n1 : 9.00 23579.67 92.11 0.00 0.00 0.00 0.00 0.00 00:38:58.303 [2024-12-16T02:00:28.962Z] =================================================================================================================== 00:38:58.303 [2024-12-16T02:00:28.962Z] Total : 23579.67 92.11 0.00 0.00 0.00 0.00 0.00 00:38:58.303 
00:38:59.240 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:59.240 Nvme0n1 : 10.00 23609.30 92.22 0.00 0.00 0.00 0.00 0.00 00:38:59.240 [2024-12-16T02:00:29.899Z] =================================================================================================================== 00:38:59.240 [2024-12-16T02:00:29.899Z] Total : 23609.30 92.22 0.00 0.00 0.00 0.00 0.00 00:38:59.240 00:38:59.240 00:38:59.240 Latency(us) 00:38:59.240 [2024-12-16T02:00:29.899Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:59.240 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:59.240 Nvme0n1 : 10.00 23607.16 92.22 0.00 0.00 5418.77 3120.76 27587.54 00:38:59.240 [2024-12-16T02:00:29.899Z] =================================================================================================================== 00:38:59.240 [2024-12-16T02:00:29.899Z] Total : 23607.16 92.22 0.00 0.00 5418.77 3120.76 27587.54 00:38:59.240 { 00:38:59.240 "results": [ 00:38:59.240 { 00:38:59.240 "job": "Nvme0n1", 00:38:59.240 "core_mask": "0x2", 00:38:59.240 "workload": "randwrite", 00:38:59.240 "status": "finished", 00:38:59.240 "queue_depth": 128, 00:38:59.240 "io_size": 4096, 00:38:59.240 "runtime": 10.003658, 00:38:59.240 "iops": 23607.164499226183, 00:38:59.240 "mibps": 92.21548632510228, 00:38:59.240 "io_failed": 0, 00:38:59.240 "io_timeout": 0, 00:38:59.240 "avg_latency_us": 5418.767265789368, 00:38:59.240 "min_latency_us": 3120.7619047619046, 00:38:59.240 "max_latency_us": 27587.53523809524 00:38:59.240 } 00:38:59.240 ], 00:38:59.240 "core_count": 1 00:38:59.240 } 00:38:59.240 03:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1244181 00:38:59.240 03:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1244181 ']' 00:38:59.240 03:00:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1244181 00:38:59.240 03:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:38:59.240 03:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:59.240 03:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1244181 00:38:59.500 03:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:59.500 03:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:59.500 03:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1244181' 00:38:59.500 killing process with pid 1244181 00:38:59.500 03:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1244181 00:38:59.500 Received shutdown signal, test time was about 10.000000 seconds 00:38:59.500 00:38:59.500 Latency(us) 00:38:59.500 [2024-12-16T02:00:30.159Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:59.500 [2024-12-16T02:00:30.159Z] =================================================================================================================== 00:38:59.500 [2024-12-16T02:00:30.159Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:59.500 03:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1244181 00:38:59.500 03:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:59.759 03:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:00.018 03:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62b1d25e-5619-4cde-b0ba-0c11f6523413 00:39:00.018 03:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:39:00.277 03:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:39:00.277 03:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:39:00.277 03:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:00.277 [2024-12-16 03:00:30.862767] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:39:00.277 03:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62b1d25e-5619-4cde-b0ba-0c11f6523413 00:39:00.277 03:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:39:00.277 03:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62b1d25e-5619-4cde-b0ba-0c11f6523413 00:39:00.277 03:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:00.277 03:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:00.277 03:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:00.277 03:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:00.277 03:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:00.277 03:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:00.277 03:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:00.277 03:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:39:00.277 03:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62b1d25e-5619-4cde-b0ba-0c11f6523413 00:39:00.536 request: 00:39:00.536 { 00:39:00.536 "uuid": "62b1d25e-5619-4cde-b0ba-0c11f6523413", 00:39:00.536 "method": 
"bdev_lvol_get_lvstores", 00:39:00.536 "req_id": 1 00:39:00.536 } 00:39:00.536 Got JSON-RPC error response 00:39:00.536 response: 00:39:00.536 { 00:39:00.536 "code": -19, 00:39:00.536 "message": "No such device" 00:39:00.536 } 00:39:00.536 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:39:00.536 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:00.536 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:00.536 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:00.536 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:00.794 aio_bdev 00:39:00.794 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7f2ea035-2dcc-4976-ba88-160e99304bb5 00:39:00.794 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=7f2ea035-2dcc-4976-ba88-160e99304bb5 00:39:00.794 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:00.795 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:39:00.795 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:00.795 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:00.795 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:01.054 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7f2ea035-2dcc-4976-ba88-160e99304bb5 -t 2000 00:39:01.054 [ 00:39:01.054 { 00:39:01.054 "name": "7f2ea035-2dcc-4976-ba88-160e99304bb5", 00:39:01.054 "aliases": [ 00:39:01.054 "lvs/lvol" 00:39:01.054 ], 00:39:01.054 "product_name": "Logical Volume", 00:39:01.054 "block_size": 4096, 00:39:01.054 "num_blocks": 38912, 00:39:01.054 "uuid": "7f2ea035-2dcc-4976-ba88-160e99304bb5", 00:39:01.054 "assigned_rate_limits": { 00:39:01.054 "rw_ios_per_sec": 0, 00:39:01.054 "rw_mbytes_per_sec": 0, 00:39:01.054 "r_mbytes_per_sec": 0, 00:39:01.054 "w_mbytes_per_sec": 0 00:39:01.054 }, 00:39:01.054 "claimed": false, 00:39:01.054 "zoned": false, 00:39:01.054 "supported_io_types": { 00:39:01.054 "read": true, 00:39:01.054 "write": true, 00:39:01.054 "unmap": true, 00:39:01.054 "flush": false, 00:39:01.054 "reset": true, 00:39:01.054 "nvme_admin": false, 00:39:01.054 "nvme_io": false, 00:39:01.054 "nvme_io_md": false, 00:39:01.054 "write_zeroes": true, 00:39:01.054 "zcopy": false, 00:39:01.054 "get_zone_info": false, 00:39:01.054 "zone_management": false, 00:39:01.054 "zone_append": false, 00:39:01.054 "compare": false, 00:39:01.054 "compare_and_write": false, 00:39:01.054 "abort": false, 00:39:01.054 "seek_hole": true, 00:39:01.054 "seek_data": true, 00:39:01.054 "copy": false, 00:39:01.054 "nvme_iov_md": false 00:39:01.054 }, 00:39:01.054 "driver_specific": { 00:39:01.054 "lvol": { 00:39:01.054 "lvol_store_uuid": "62b1d25e-5619-4cde-b0ba-0c11f6523413", 00:39:01.054 "base_bdev": "aio_bdev", 00:39:01.054 
"thin_provision": false, 00:39:01.054 "num_allocated_clusters": 38, 00:39:01.054 "snapshot": false, 00:39:01.054 "clone": false, 00:39:01.054 "esnap_clone": false 00:39:01.054 } 00:39:01.054 } 00:39:01.054 } 00:39:01.054 ] 00:39:01.054 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:39:01.054 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62b1d25e-5619-4cde-b0ba-0c11f6523413 00:39:01.054 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:39:01.313 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:39:01.313 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62b1d25e-5619-4cde-b0ba-0c11f6523413 00:39:01.313 03:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:39:01.572 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:39:01.572 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7f2ea035-2dcc-4976-ba88-160e99304bb5 00:39:01.831 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 62b1d25e-5619-4cde-b0ba-0c11f6523413 
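The cluster counts the trace asserts (free_clusters == 61, total_data_clusters == 99, "num_allocated_clusters": 38, "num_blocks": 38912) are all consistent with one another, assuming the 4 MiB (4194304-byte) cluster size the test script passes to bdev_lvol_create_lvstore. A minimal sanity-check of that arithmetic, using only figures printed in the log:

```python
# Sanity-check of the lvstore cluster accounting reported in the trace,
# assuming the 4 MiB (--cluster-sz 4194304) cluster size used by the test.
CLUSTER_SZ = 4 * 1024 * 1024   # bytes per lvstore cluster
BLOCK_SZ = 4096                # bdev_aio_create ... 4096

# bdev_lvol_create requests a 150 MiB lvol, which rounds up to whole clusters:
lvol_size_bytes = 150 * 1024 * 1024
allocated = -(-lvol_size_bytes // CLUSTER_SZ)   # ceiling division
assert allocated == 38         # matches "num_allocated_clusters": 38

# The lvol bdev therefore spans 38 clusters worth of 4 KiB blocks:
num_blocks = allocated * CLUSTER_SZ // BLOCK_SZ
assert num_blocks == 38912     # matches "num_blocks": 38912

# The grown lvstore reports 99 data clusters; the free count the test
# checks is simply total minus allocated:
total_data_clusters = 99
free_clusters = total_data_clusters - allocated
assert free_clusters == 61     # matches free_clusters=61 in the trace
```

The 150 MiB request is not a whole number of 4 MiB clusters (37.5), so the lvol occupies 38 clusters, i.e. 152 MiB of backing store.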
00:39:01.831 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:02.090 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:02.090 00:39:02.090 real 0m15.693s 00:39:02.090 user 0m15.235s 00:39:02.090 sys 0m1.490s 00:39:02.090 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:02.090 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:39:02.090 ************************************ 00:39:02.090 END TEST lvs_grow_clean 00:39:02.090 ************************************ 00:39:02.090 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:39:02.090 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:02.090 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:02.090 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:02.090 ************************************ 00:39:02.090 START TEST lvs_grow_dirty 00:39:02.090 ************************************ 00:39:02.090 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:39:02.090 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:39:02.090 03:00:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:39:02.090 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:39:02.090 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:39:02.090 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:39:02.090 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:39:02.090 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:02.090 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:02.090 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:02.349 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:39:02.349 03:00:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:39:02.608 03:00:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=906f7868-e329-4f19-9b40-09704c3cfcc7 00:39:02.608 03:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 906f7868-e329-4f19-9b40-09704c3cfcc7 00:39:02.608 03:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:39:02.866 03:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:39:02.866 03:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:39:02.866 03:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 906f7868-e329-4f19-9b40-09704c3cfcc7 lvol 150 00:39:03.125 03:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=d087cd99-3fba-4496-805c-f3b7013b2505 00:39:03.125 03:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:03.125 03:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:39:03.125 [2024-12-16 03:00:33.726733] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:39:03.125 [2024-12-16 
03:00:33.726894] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:39:03.125 true 00:39:03.125 03:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 906f7868-e329-4f19-9b40-09704c3cfcc7 00:39:03.125 03:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:39:03.385 03:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:39:03.385 03:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:03.644 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d087cd99-3fba-4496-805c-f3b7013b2505 00:39:03.644 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:03.903 [2024-12-16 03:00:34.467153] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:03.903 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:04.163 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1246688 00:39:04.163 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:04.163 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:39:04.163 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1246688 /var/tmp/bdevperf.sock 00:39:04.163 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1246688 ']' 00:39:04.163 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:04.163 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:04.163 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:04.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:04.163 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:04.163 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:04.163 [2024-12-16 03:00:34.715833] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:39:04.163 [2024-12-16 03:00:34.715884] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1246688 ] 00:39:04.163 [2024-12-16 03:00:34.786906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:04.163 [2024-12-16 03:00:34.809372] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:04.421 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:04.421 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:39:04.422 03:00:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:39:04.680 Nvme0n1 00:39:04.680 03:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:39:04.938 [ 00:39:04.938 { 00:39:04.938 "name": "Nvme0n1", 00:39:04.938 "aliases": [ 00:39:04.938 "d087cd99-3fba-4496-805c-f3b7013b2505" 00:39:04.938 ], 00:39:04.938 "product_name": "NVMe disk", 00:39:04.938 "block_size": 4096, 00:39:04.938 "num_blocks": 38912, 00:39:04.938 "uuid": "d087cd99-3fba-4496-805c-f3b7013b2505", 00:39:04.938 "numa_id": 1, 00:39:04.938 "assigned_rate_limits": { 00:39:04.938 "rw_ios_per_sec": 0, 00:39:04.938 "rw_mbytes_per_sec": 0, 00:39:04.938 "r_mbytes_per_sec": 0, 00:39:04.938 "w_mbytes_per_sec": 0 00:39:04.938 }, 00:39:04.938 "claimed": false, 00:39:04.938 "zoned": false, 
00:39:04.938 "supported_io_types": { 00:39:04.938 "read": true, 00:39:04.938 "write": true, 00:39:04.938 "unmap": true, 00:39:04.938 "flush": true, 00:39:04.938 "reset": true, 00:39:04.938 "nvme_admin": true, 00:39:04.938 "nvme_io": true, 00:39:04.938 "nvme_io_md": false, 00:39:04.938 "write_zeroes": true, 00:39:04.938 "zcopy": false, 00:39:04.938 "get_zone_info": false, 00:39:04.938 "zone_management": false, 00:39:04.938 "zone_append": false, 00:39:04.938 "compare": true, 00:39:04.938 "compare_and_write": true, 00:39:04.938 "abort": true, 00:39:04.938 "seek_hole": false, 00:39:04.938 "seek_data": false, 00:39:04.938 "copy": true, 00:39:04.938 "nvme_iov_md": false 00:39:04.938 }, 00:39:04.938 "memory_domains": [ 00:39:04.938 { 00:39:04.938 "dma_device_id": "system", 00:39:04.938 "dma_device_type": 1 00:39:04.938 } 00:39:04.938 ], 00:39:04.938 "driver_specific": { 00:39:04.938 "nvme": [ 00:39:04.938 { 00:39:04.938 "trid": { 00:39:04.938 "trtype": "TCP", 00:39:04.938 "adrfam": "IPv4", 00:39:04.938 "traddr": "10.0.0.2", 00:39:04.938 "trsvcid": "4420", 00:39:04.938 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:39:04.938 }, 00:39:04.938 "ctrlr_data": { 00:39:04.938 "cntlid": 1, 00:39:04.938 "vendor_id": "0x8086", 00:39:04.938 "model_number": "SPDK bdev Controller", 00:39:04.938 "serial_number": "SPDK0", 00:39:04.938 "firmware_revision": "25.01", 00:39:04.938 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:04.938 "oacs": { 00:39:04.938 "security": 0, 00:39:04.938 "format": 0, 00:39:04.938 "firmware": 0, 00:39:04.938 "ns_manage": 0 00:39:04.938 }, 00:39:04.938 "multi_ctrlr": true, 00:39:04.938 "ana_reporting": false 00:39:04.938 }, 00:39:04.938 "vs": { 00:39:04.938 "nvme_version": "1.3" 00:39:04.938 }, 00:39:04.938 "ns_data": { 00:39:04.938 "id": 1, 00:39:04.938 "can_share": true 00:39:04.938 } 00:39:04.938 } 00:39:04.938 ], 00:39:04.938 "mp_policy": "active_passive" 00:39:04.938 } 00:39:04.938 } 00:39:04.938 ] 00:39:04.938 03:00:35 
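The same logical volume reports different capabilities depending on which side of the NVMe/TCP connection it is inspected from: the raw lvol bdev on the target (the earlier bdev_get_bdevs output) versus the Nvme0n1 bdev that the initiator-side bdevperf attaches. A small sketch comparing the supported_io_types flags, abridged from the two JSON blobs above:

```python
# supported_io_types flags abridged from the two bdev_get_bdevs dumps above:
# the target-side Logical Volume bdev vs. the initiator-side NVMe disk bdev.
lvol_io = {"flush": False, "nvme_admin": False, "seek_hole": True, "copy": False}
nvme_io = {"flush": True,  "nvme_admin": True,  "seek_hole": False, "copy": True}

# Capabilities that flip once the lvol is exported as an NVMe namespace:
changed = {k for k in lvol_io if lvol_io[k] != nvme_io[k]}
assert changed == {"flush", "nvme_admin", "seek_hole", "copy"}
```

File-semantics operations (seek_hole/seek_data) are only meaningful on the lvol side, while NVMe command-set features (flush, admin commands, copy) appear once the device is presented as an NVMe namespace.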
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1246705 00:39:04.938 03:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:04.938 03:00:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:39:04.938 Running I/O for 10 seconds... 00:39:05.874 Latency(us) 00:39:05.874 [2024-12-16T02:00:36.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:05.874 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:05.874 Nvme0n1 : 1.00 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:39:05.874 [2024-12-16T02:00:36.533Z] =================================================================================================================== 00:39:05.874 [2024-12-16T02:00:36.533Z] Total : 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:39:05.874 00:39:06.812 03:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 906f7868-e329-4f19-9b40-09704c3cfcc7 00:39:07.070 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:07.070 Nvme0n1 : 2.00 23304.50 91.03 0.00 0.00 0.00 0.00 0.00 00:39:07.070 [2024-12-16T02:00:37.729Z] =================================================================================================================== 00:39:07.070 [2024-12-16T02:00:37.729Z] Total : 23304.50 91.03 0.00 0.00 0.00 0.00 0.00 00:39:07.070 00:39:07.070 true 00:39:07.070 03:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 906f7868-e329-4f19-9b40-09704c3cfcc7 00:39:07.070 03:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:39:07.330 03:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:39:07.330 03:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:39:07.330 03:00:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1246705 00:39:07.897 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:07.897 Nvme0n1 : 3.00 23410.33 91.45 0.00 0.00 0.00 0.00 0.00 00:39:07.897 [2024-12-16T02:00:38.556Z] =================================================================================================================== 00:39:07.897 [2024-12-16T02:00:38.556Z] Total : 23410.33 91.45 0.00 0.00 0.00 0.00 0.00 00:39:07.897 00:39:09.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:09.274 Nvme0n1 : 4.00 23495.00 91.78 0.00 0.00 0.00 0.00 0.00 00:39:09.274 [2024-12-16T02:00:39.933Z] =================================================================================================================== 00:39:09.274 [2024-12-16T02:00:39.933Z] Total : 23495.00 91.78 0.00 0.00 0.00 0.00 0.00 00:39:09.274 00:39:10.211 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:10.211 Nvme0n1 : 5.00 23469.60 91.68 0.00 0.00 0.00 0.00 0.00 00:39:10.211 [2024-12-16T02:00:40.870Z] =================================================================================================================== 00:39:10.211 [2024-12-16T02:00:40.870Z] Total : 23469.60 91.68 0.00 0.00 0.00 0.00 0.00 00:39:10.211 00:39:11.146 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:39:11.146 Nvme0n1 : 6.00 23473.83 91.69 0.00 0.00 0.00 0.00 0.00 00:39:11.146 [2024-12-16T02:00:41.805Z] =================================================================================================================== 00:39:11.147 [2024-12-16T02:00:41.806Z] Total : 23473.83 91.69 0.00 0.00 0.00 0.00 0.00 00:39:11.147 00:39:12.083 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:12.083 Nvme0n1 : 7.00 23531.29 91.92 0.00 0.00 0.00 0.00 0.00 00:39:12.083 [2024-12-16T02:00:42.742Z] =================================================================================================================== 00:39:12.083 [2024-12-16T02:00:42.742Z] Total : 23531.29 91.92 0.00 0.00 0.00 0.00 0.00 00:39:12.083 00:39:13.019 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:13.019 Nvme0n1 : 8.00 23542.62 91.96 0.00 0.00 0.00 0.00 0.00 00:39:13.019 [2024-12-16T02:00:43.678Z] =================================================================================================================== 00:39:13.019 [2024-12-16T02:00:43.679Z] Total : 23542.62 91.96 0.00 0.00 0.00 0.00 0.00 00:39:13.020 00:39:13.956 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:13.956 Nvme0n1 : 9.00 23579.67 92.11 0.00 0.00 0.00 0.00 0.00 00:39:13.956 [2024-12-16T02:00:44.615Z] =================================================================================================================== 00:39:13.956 [2024-12-16T02:00:44.615Z] Total : 23579.67 92.11 0.00 0.00 0.00 0.00 0.00 00:39:13.956 00:39:14.896 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:14.896 Nvme0n1 : 10.00 23609.30 92.22 0.00 0.00 0.00 0.00 0.00 00:39:14.896 [2024-12-16T02:00:45.555Z] =================================================================================================================== 00:39:14.896 [2024-12-16T02:00:45.555Z] Total : 23609.30 92.22 0.00 0.00 0.00 0.00 0.00 00:39:14.896 00:39:14.896 
00:39:14.896 Latency(us) 00:39:14.896 [2024-12-16T02:00:45.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:14.896 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:14.896 Nvme0n1 : 10.00 23616.84 92.25 0.00 0.00 5417.08 4681.14 27962.03 00:39:14.896 [2024-12-16T02:00:45.555Z] =================================================================================================================== 00:39:14.896 [2024-12-16T02:00:45.555Z] Total : 23616.84 92.25 0.00 0.00 5417.08 4681.14 27962.03 00:39:14.896 { 00:39:14.896 "results": [ 00:39:14.896 { 00:39:14.896 "job": "Nvme0n1", 00:39:14.896 "core_mask": "0x2", 00:39:14.896 "workload": "randwrite", 00:39:14.896 "status": "finished", 00:39:14.896 "queue_depth": 128, 00:39:14.896 "io_size": 4096, 00:39:14.896 "runtime": 10.002226, 00:39:14.896 "iops": 23616.842890772514, 00:39:14.896 "mibps": 92.25329254208013, 00:39:14.896 "io_failed": 0, 00:39:14.896 "io_timeout": 0, 00:39:14.896 "avg_latency_us": 5417.077742719137, 00:39:14.896 "min_latency_us": 4681.142857142857, 00:39:14.896 "max_latency_us": 27962.02666666667 00:39:14.896 } 00:39:14.896 ], 00:39:14.896 "core_count": 1 00:39:14.896 } 00:39:15.154 03:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1246688 00:39:15.154 03:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1246688 ']' 00:39:15.154 03:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1246688 00:39:15.154 03:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:39:15.154 03:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:15.154 03:00:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1246688 00:39:15.154 03:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:15.154 03:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:15.154 03:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1246688' 00:39:15.154 killing process with pid 1246688 00:39:15.154 03:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1246688 00:39:15.154 Received shutdown signal, test time was about 10.000000 seconds 00:39:15.154 00:39:15.154 Latency(us) 00:39:15.154 [2024-12-16T02:00:45.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:15.154 [2024-12-16T02:00:45.813Z] =================================================================================================================== 00:39:15.154 [2024-12-16T02:00:45.813Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:15.154 03:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1246688 00:39:15.154 03:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:15.412 03:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:15.671 03:00:46 
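The bdevperf summary table and the JSON "results" block above are two views of the same numbers; MiB/s is derived from IOPS and the 4096-byte IO size, and (assuming bdevperf computes IOPS as completed IOs divided by runtime) the total IO count falls out as well. A quick check with figures taken verbatim from the randwrite run in the log:

```python
# Cross-check of the bdevperf results JSON printed above for the dirty run.
io_size = 4096                       # "io_size": 4096
runtime_s = 10.002226                # "runtime": 10.002226
iops = 23616.842890772514            # "iops"

# MiB/s is just IOPS scaled by the IO size:
mibps = iops * io_size / (1024 * 1024)
assert abs(mibps - 92.25329254208013) < 1e-9   # matches "mibps" in the JSON

# Total IOs completed over the ~10 s run (a whole number, as expected):
total_ios = iops * runtime_s
assert round(total_ios) == 236221
```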
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 906f7868-e329-4f19-9b40-09704c3cfcc7 00:39:15.671 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:39:15.930 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:39:15.930 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:39:15.930 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1243694 00:39:15.930 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1243694 00:39:15.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1243694 Killed "${NVMF_APP[@]}" "$@" 00:39:15.930 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:39:15.930 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:39:15.930 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:15.930 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:15.930 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:15.930 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1248487 00:39:15.930 03:00:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1248487 00:39:15.930 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:39:15.930 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1248487 ']' 00:39:15.930 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:15.930 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:15.930 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:15.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:15.930 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:15.930 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:15.930 [2024-12-16 03:00:46.451370] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:15.930 [2024-12-16 03:00:46.452270] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:39:15.930 [2024-12-16 03:00:46.452303] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:15.930 [2024-12-16 03:00:46.530358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:15.930 [2024-12-16 03:00:46.551491] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:15.930 [2024-12-16 03:00:46.551527] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:15.930 [2024-12-16 03:00:46.551533] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:15.930 [2024-12-16 03:00:46.551539] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:15.930 [2024-12-16 03:00:46.551543] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:15.930 [2024-12-16 03:00:46.552014] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:16.189 [2024-12-16 03:00:46.615101] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:16.189 [2024-12-16 03:00:46.615302] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:16.189 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:16.189 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:39:16.189 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:16.189 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:16.189 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:16.189 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:16.189 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:16.448 [2024-12-16 03:00:46.853367] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:39:16.448 [2024-12-16 03:00:46.853566] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:39:16.448 [2024-12-16 03:00:46.853651] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:39:16.448 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:39:16.448 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev d087cd99-3fba-4496-805c-f3b7013b2505 00:39:16.448 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=d087cd99-3fba-4496-805c-f3b7013b2505 00:39:16.448 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:16.448 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:39:16.448 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:16.448 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:16.448 03:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:16.448 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d087cd99-3fba-4496-805c-f3b7013b2505 -t 2000 00:39:16.760 [ 00:39:16.760 { 00:39:16.760 "name": "d087cd99-3fba-4496-805c-f3b7013b2505", 00:39:16.760 "aliases": [ 00:39:16.760 "lvs/lvol" 00:39:16.760 ], 00:39:16.760 "product_name": "Logical Volume", 00:39:16.760 "block_size": 4096, 00:39:16.760 "num_blocks": 38912, 00:39:16.760 "uuid": "d087cd99-3fba-4496-805c-f3b7013b2505", 00:39:16.760 "assigned_rate_limits": { 00:39:16.760 "rw_ios_per_sec": 0, 00:39:16.760 "rw_mbytes_per_sec": 0, 00:39:16.760 "r_mbytes_per_sec": 0, 00:39:16.760 "w_mbytes_per_sec": 0 00:39:16.760 }, 00:39:16.760 "claimed": false, 00:39:16.760 "zoned": false, 00:39:16.760 "supported_io_types": { 00:39:16.760 "read": true, 00:39:16.760 "write": true, 00:39:16.760 "unmap": true, 00:39:16.760 "flush": false, 00:39:16.760 "reset": true, 00:39:16.760 "nvme_admin": false, 00:39:16.760 "nvme_io": false, 00:39:16.760 "nvme_io_md": false, 00:39:16.760 "write_zeroes": true, 
00:39:16.760 "zcopy": false, 00:39:16.760 "get_zone_info": false, 00:39:16.760 "zone_management": false, 00:39:16.760 "zone_append": false, 00:39:16.760 "compare": false, 00:39:16.760 "compare_and_write": false, 00:39:16.760 "abort": false, 00:39:16.760 "seek_hole": true, 00:39:16.760 "seek_data": true, 00:39:16.760 "copy": false, 00:39:16.760 "nvme_iov_md": false 00:39:16.760 }, 00:39:16.760 "driver_specific": { 00:39:16.760 "lvol": { 00:39:16.760 "lvol_store_uuid": "906f7868-e329-4f19-9b40-09704c3cfcc7", 00:39:16.760 "base_bdev": "aio_bdev", 00:39:16.760 "thin_provision": false, 00:39:16.760 "num_allocated_clusters": 38, 00:39:16.760 "snapshot": false, 00:39:16.760 "clone": false, 00:39:16.760 "esnap_clone": false 00:39:16.760 } 00:39:16.760 } 00:39:16.760 } 00:39:16.760 ] 00:39:16.760 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:39:16.760 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 906f7868-e329-4f19-9b40-09704c3cfcc7 00:39:16.760 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:39:17.115 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:39:17.115 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 906f7868-e329-4f19-9b40-09704c3cfcc7 00:39:17.115 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:39:17.115 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:39:17.115 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:17.375 [2024-12-16 03:00:47.792464] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:39:17.375 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 906f7868-e329-4f19-9b40-09704c3cfcc7 00:39:17.375 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:39:17.375 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 906f7868-e329-4f19-9b40-09704c3cfcc7 00:39:17.375 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:17.375 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:17.375 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:17.375 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:17.375 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:17.375 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:17.375 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:17.375 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:39:17.375 03:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 906f7868-e329-4f19-9b40-09704c3cfcc7 00:39:17.375 request: 00:39:17.375 { 00:39:17.375 "uuid": "906f7868-e329-4f19-9b40-09704c3cfcc7", 00:39:17.375 "method": "bdev_lvol_get_lvstores", 00:39:17.375 "req_id": 1 00:39:17.375 } 00:39:17.375 Got JSON-RPC error response 00:39:17.375 response: 00:39:17.375 { 00:39:17.375 "code": -19, 00:39:17.375 "message": "No such device" 00:39:17.375 } 00:39:17.375 03:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:39:17.375 03:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:17.375 03:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:17.375 03:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:17.375 03:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:17.634 aio_bdev 00:39:17.634 03:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev d087cd99-3fba-4496-805c-f3b7013b2505 00:39:17.634 03:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=d087cd99-3fba-4496-805c-f3b7013b2505 00:39:17.634 03:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:17.634 03:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:39:17.634 03:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:17.634 03:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:17.634 03:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:17.892 03:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d087cd99-3fba-4496-805c-f3b7013b2505 -t 2000 00:39:18.151 [ 00:39:18.151 { 00:39:18.151 "name": "d087cd99-3fba-4496-805c-f3b7013b2505", 00:39:18.151 "aliases": [ 00:39:18.151 "lvs/lvol" 00:39:18.151 ], 00:39:18.151 "product_name": "Logical Volume", 00:39:18.151 "block_size": 4096, 00:39:18.151 "num_blocks": 38912, 00:39:18.151 "uuid": "d087cd99-3fba-4496-805c-f3b7013b2505", 00:39:18.151 "assigned_rate_limits": { 00:39:18.151 "rw_ios_per_sec": 0, 00:39:18.151 "rw_mbytes_per_sec": 0, 00:39:18.151 
"r_mbytes_per_sec": 0, 00:39:18.151 "w_mbytes_per_sec": 0 00:39:18.151 }, 00:39:18.151 "claimed": false, 00:39:18.151 "zoned": false, 00:39:18.151 "supported_io_types": { 00:39:18.151 "read": true, 00:39:18.151 "write": true, 00:39:18.151 "unmap": true, 00:39:18.151 "flush": false, 00:39:18.151 "reset": true, 00:39:18.151 "nvme_admin": false, 00:39:18.151 "nvme_io": false, 00:39:18.151 "nvme_io_md": false, 00:39:18.151 "write_zeroes": true, 00:39:18.151 "zcopy": false, 00:39:18.151 "get_zone_info": false, 00:39:18.151 "zone_management": false, 00:39:18.151 "zone_append": false, 00:39:18.151 "compare": false, 00:39:18.151 "compare_and_write": false, 00:39:18.151 "abort": false, 00:39:18.151 "seek_hole": true, 00:39:18.151 "seek_data": true, 00:39:18.151 "copy": false, 00:39:18.151 "nvme_iov_md": false 00:39:18.151 }, 00:39:18.151 "driver_specific": { 00:39:18.151 "lvol": { 00:39:18.151 "lvol_store_uuid": "906f7868-e329-4f19-9b40-09704c3cfcc7", 00:39:18.151 "base_bdev": "aio_bdev", 00:39:18.151 "thin_provision": false, 00:39:18.151 "num_allocated_clusters": 38, 00:39:18.151 "snapshot": false, 00:39:18.151 "clone": false, 00:39:18.151 "esnap_clone": false 00:39:18.151 } 00:39:18.151 } 00:39:18.151 } 00:39:18.151 ] 00:39:18.151 03:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:39:18.151 03:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 906f7868-e329-4f19-9b40-09704c3cfcc7 00:39:18.151 03:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:39:18.151 03:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:39:18.151 03:00:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 906f7868-e329-4f19-9b40-09704c3cfcc7 00:39:18.151 03:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:39:18.410 03:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:39:18.410 03:00:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d087cd99-3fba-4496-805c-f3b7013b2505 00:39:18.669 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 906f7868-e329-4f19-9b40-09704c3cfcc7 00:39:18.928 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:18.928 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:18.928 00:39:18.928 real 0m16.816s 00:39:18.928 user 0m34.212s 00:39:18.928 sys 0m3.870s 00:39:18.928 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:18.928 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:18.928 ************************************ 00:39:18.928 END TEST lvs_grow_dirty 00:39:18.928 ************************************ 
00:39:18.928 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:39:18.928 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:39:18.928 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:39:18.928 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:39:18.928 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:39:18.928 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:39:18.928 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:39:18.928 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:39:18.928 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:39:18.928 nvmf_trace.0 00:39:19.187 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:39:19.187 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:39:19.187 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:19.187 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:39:19.187 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:19.187 03:00:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:39:19.187 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:19.188 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:19.188 rmmod nvme_tcp 00:39:19.188 rmmod nvme_fabrics 00:39:19.188 rmmod nvme_keyring 00:39:19.188 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:19.188 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:39:19.188 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:39:19.188 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1248487 ']' 00:39:19.188 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1248487 00:39:19.188 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1248487 ']' 00:39:19.188 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1248487 00:39:19.188 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:39:19.188 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:19.188 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1248487 00:39:19.188 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:19.188 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:19.188 
03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1248487' 00:39:19.188 killing process with pid 1248487 00:39:19.188 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1248487 00:39:19.188 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1248487 00:39:19.446 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:19.446 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:19.446 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:19.446 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:39:19.446 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:39:19.446 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:19.446 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:39:19.446 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:19.446 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:19.446 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:19.446 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:19.446 03:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:21.351 
03:00:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:21.351 00:39:21.351 real 0m41.629s 00:39:21.351 user 0m51.876s 00:39:21.351 sys 0m10.257s 00:39:21.351 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:21.351 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:21.351 ************************************ 00:39:21.351 END TEST nvmf_lvs_grow 00:39:21.351 ************************************ 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:21.611 ************************************ 00:39:21.611 START TEST nvmf_bdev_io_wait 00:39:21.611 ************************************ 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:39:21.611 * Looking for test storage... 
00:39:21.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:21.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.611 --rc genhtml_branch_coverage=1 00:39:21.611 --rc genhtml_function_coverage=1 00:39:21.611 --rc genhtml_legend=1 00:39:21.611 --rc geninfo_all_blocks=1 00:39:21.611 --rc geninfo_unexecuted_blocks=1 00:39:21.611 00:39:21.611 ' 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:21.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.611 --rc genhtml_branch_coverage=1 00:39:21.611 --rc genhtml_function_coverage=1 00:39:21.611 --rc genhtml_legend=1 00:39:21.611 --rc geninfo_all_blocks=1 00:39:21.611 --rc geninfo_unexecuted_blocks=1 00:39:21.611 00:39:21.611 ' 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:21.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.611 --rc genhtml_branch_coverage=1 00:39:21.611 --rc genhtml_function_coverage=1 00:39:21.611 --rc genhtml_legend=1 00:39:21.611 --rc geninfo_all_blocks=1 00:39:21.611 --rc geninfo_unexecuted_blocks=1 00:39:21.611 00:39:21.611 ' 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:21.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:21.611 --rc genhtml_branch_coverage=1 00:39:21.611 --rc genhtml_function_coverage=1 
00:39:21.611 --rc genhtml_legend=1 00:39:21.611 --rc geninfo_all_blocks=1 00:39:21.611 --rc geninfo_unexecuted_blocks=1 00:39:21.611 00:39:21.611 ' 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:21.611 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:39:21.870 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:21.870 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:21.870 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:21.870 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:21.870 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:21.870 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:21.870 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:21.870 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:21.870 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:21.870 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:21.870 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:21.870 03:00:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:21.870 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:21.870 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:21.870 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:21.870 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:21.870 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:21.870 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:39:21.870 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:21.870 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:21.870 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:21.871 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.871 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.871 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.871 03:00:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:39:21.871 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:21.871 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:39:21.871 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:21.871 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:21.871 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:21.871 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:21.871 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:21.871 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:21.871 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:21.871 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:21.871 03:00:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:21.871 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:21.871 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:21.871 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:21.871 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:39:21.871 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:21.871 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:21.871 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:21.871 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:21.871 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:21.871 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:21.871 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:21.871 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:21.871 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:21.871 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:21.871 03:00:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:39:21.871 03:00:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:39:28.440 03:00:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:28.440 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:28.440 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:28.440 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:28.441 Found net devices under 0000:af:00.0: cvl_0_0 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:28.441 Found net devices under 0000:af:00.1: cvl_0_1 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:39:28.441 03:00:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:28.441 03:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:28.441 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:28.441 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:28.441 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:28.441 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:28.441 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:28.441 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:28.441 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:28.441 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:28.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:28.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:39:28.441 00:39:28.441 --- 10.0.0.2 ping statistics --- 00:39:28.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:28.441 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:39:28.441 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:28.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:28.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:39:28.441 00:39:28.441 --- 10.0.0.1 ping statistics --- 00:39:28.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:28.441 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:39:28.441 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:28.441 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:39:28.441 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:28.441 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:28.441 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:28.441 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:28.441 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:28.441 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:28.441 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:28.441 03:00:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:39:28.441 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:28.441 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:28.441 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:28.441 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1252459 00:39:28.441 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1252459 00:39:28.441 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:39:28.441 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1252459 ']' 00:39:28.441 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:28.441 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:28.441 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:28.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:28.442 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:28.442 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:28.442 [2024-12-16 03:00:58.345164] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:28.442 [2024-12-16 03:00:58.346059] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:28.442 [2024-12-16 03:00:58.346092] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:28.442 [2024-12-16 03:00:58.421246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:28.442 [2024-12-16 03:00:58.444916] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:28.442 [2024-12-16 03:00:58.444953] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:28.443 [2024-12-16 03:00:58.444960] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:28.443 [2024-12-16 03:00:58.444966] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:28.443 [2024-12-16 03:00:58.444971] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:28.443 [2024-12-16 03:00:58.446412] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:28.443 [2024-12-16 03:00:58.446524] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:39:28.443 [2024-12-16 03:00:58.446631] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:28.443 [2024-12-16 03:00:58.446632] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:39:28.443 [2024-12-16 03:00:58.446965] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.443 03:00:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:28.443 [2024-12-16 03:00:58.594254] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:28.443 [2024-12-16 03:00:58.595099] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:28.443 [2024-12-16 03:00:58.595310] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:28.443 [2024-12-16 03:00:58.595403] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:28.443 [2024-12-16 03:00:58.607405] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:28.443 Malloc0 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.443 03:00:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:28.443 [2024-12-16 03:00:58.683608] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1252571 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1252574 00:39:28.443 03:00:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:28.443 { 00:39:28.443 "params": { 00:39:28.443 "name": "Nvme$subsystem", 00:39:28.443 "trtype": "$TEST_TRANSPORT", 00:39:28.443 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:28.443 "adrfam": "ipv4", 00:39:28.443 "trsvcid": "$NVMF_PORT", 00:39:28.443 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:28.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:28.443 "hdgst": ${hdgst:-false}, 00:39:28.443 "ddgst": ${ddgst:-false} 00:39:28.443 }, 00:39:28.443 "method": "bdev_nvme_attach_controller" 00:39:28.443 } 00:39:28.443 EOF 00:39:28.443 )") 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1252577 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:28.443 03:00:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:28.443 { 00:39:28.443 "params": { 00:39:28.443 "name": "Nvme$subsystem", 00:39:28.443 "trtype": "$TEST_TRANSPORT", 00:39:28.443 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:28.443 "adrfam": "ipv4", 00:39:28.443 "trsvcid": "$NVMF_PORT", 00:39:28.443 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:28.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:28.443 "hdgst": ${hdgst:-false}, 00:39:28.443 "ddgst": ${ddgst:-false} 00:39:28.443 }, 00:39:28.443 "method": "bdev_nvme_attach_controller" 00:39:28.443 } 00:39:28.443 EOF 00:39:28.443 )") 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1252580 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:28.443 { 00:39:28.443 "params": { 00:39:28.443 "name": "Nvme$subsystem", 00:39:28.443 "trtype": "$TEST_TRANSPORT", 00:39:28.443 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:28.443 "adrfam": "ipv4", 00:39:28.443 "trsvcid": "$NVMF_PORT", 00:39:28.443 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:28.443 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:28.443 "hdgst": ${hdgst:-false}, 00:39:28.443 "ddgst": ${ddgst:-false} 00:39:28.443 }, 00:39:28.443 "method": "bdev_nvme_attach_controller" 00:39:28.443 } 00:39:28.443 EOF 00:39:28.443 )") 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:28.443 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:28.444 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:28.444 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:28.444 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:28.444 { 00:39:28.444 "params": { 00:39:28.444 "name": "Nvme$subsystem", 00:39:28.444 "trtype": "$TEST_TRANSPORT", 00:39:28.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:28.444 "adrfam": "ipv4", 00:39:28.444 "trsvcid": "$NVMF_PORT", 00:39:28.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:28.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:28.444 "hdgst": ${hdgst:-false}, 00:39:28.444 "ddgst": ${ddgst:-false} 00:39:28.444 }, 00:39:28.444 "method": 
"bdev_nvme_attach_controller" 00:39:28.444 } 00:39:28.444 EOF 00:39:28.444 )") 00:39:28.444 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:28.444 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1252571 00:39:28.444 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:28.444 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:28.444 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:28.444 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:28.444 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:28.444 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:28.444 "params": { 00:39:28.444 "name": "Nvme1", 00:39:28.444 "trtype": "tcp", 00:39:28.444 "traddr": "10.0.0.2", 00:39:28.444 "adrfam": "ipv4", 00:39:28.444 "trsvcid": "4420", 00:39:28.444 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:28.444 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:28.444 "hdgst": false, 00:39:28.444 "ddgst": false 00:39:28.444 }, 00:39:28.444 "method": "bdev_nvme_attach_controller" 00:39:28.444 }' 00:39:28.444 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:39:28.444 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:28.444 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:28.444 "params": { 00:39:28.444 "name": "Nvme1", 00:39:28.444 "trtype": "tcp", 00:39:28.444 "traddr": "10.0.0.2", 00:39:28.444 "adrfam": "ipv4", 00:39:28.444 "trsvcid": "4420", 00:39:28.444 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:28.444 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:28.444 "hdgst": false, 00:39:28.444 "ddgst": false 00:39:28.444 }, 00:39:28.444 "method": "bdev_nvme_attach_controller" 00:39:28.444 }' 00:39:28.444 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:28.444 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:28.444 "params": { 00:39:28.444 "name": "Nvme1", 00:39:28.444 "trtype": "tcp", 00:39:28.444 "traddr": "10.0.0.2", 00:39:28.444 "adrfam": "ipv4", 00:39:28.444 "trsvcid": "4420", 00:39:28.444 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:28.444 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:28.444 "hdgst": false, 00:39:28.444 "ddgst": false 00:39:28.444 }, 00:39:28.444 "method": "bdev_nvme_attach_controller" 00:39:28.444 }' 00:39:28.444 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:28.444 03:00:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:28.444 "params": { 00:39:28.444 "name": "Nvme1", 00:39:28.444 "trtype": "tcp", 00:39:28.444 "traddr": "10.0.0.2", 00:39:28.444 "adrfam": "ipv4", 00:39:28.444 "trsvcid": "4420", 00:39:28.444 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:28.444 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:28.444 "hdgst": false, 00:39:28.444 "ddgst": false 00:39:28.444 }, 00:39:28.444 "method": "bdev_nvme_attach_controller" 
00:39:28.444 }' 00:39:28.444 [2024-12-16 03:00:58.733170] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:28.444 [2024-12-16 03:00:58.733220] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:39:28.444 [2024-12-16 03:00:58.735459] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:28.444 [2024-12-16 03:00:58.735500] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:39:28.444 [2024-12-16 03:00:58.737816] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:28.444 [2024-12-16 03:00:58.737860] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:39:28.444 [2024-12-16 03:00:58.738297] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:39:28.444 [2024-12-16 03:00:58.738334] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:39:28.444 [2024-12-16 03:00:58.910376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:28.444 [2024-12-16 03:00:58.927775] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:39:28.444 [2024-12-16 03:00:59.007623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:28.444 [2024-12-16 03:00:59.024704] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:39:28.703 [2024-12-16 03:00:59.107520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:28.703 [2024-12-16 03:00:59.124792] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:39:28.703 [2024-12-16 03:00:59.205559] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:28.703 [2024-12-16 03:00:59.226916] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:39:28.703 Running I/O for 1 seconds... 00:39:28.962 Running I/O for 1 seconds... 00:39:28.962 Running I/O for 1 seconds... 00:39:28.962 Running I/O for 1 seconds... 
00:39:29.899 11932.00 IOPS, 46.61 MiB/s 00:39:29.899 Latency(us) 00:39:29.899 [2024-12-16T02:01:00.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:29.899 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:39:29.899 Nvme1n1 : 1.01 11992.52 46.85 0.00 0.00 10638.21 1458.96 12982.37 00:39:29.899 [2024-12-16T02:01:00.558Z] =================================================================================================================== 00:39:29.899 [2024-12-16T02:01:00.558Z] Total : 11992.52 46.85 0.00 0.00 10638.21 1458.96 12982.37 00:39:29.899 10238.00 IOPS, 39.99 MiB/s [2024-12-16T02:01:00.558Z] 243016.00 IOPS, 949.28 MiB/s 00:39:29.899 Latency(us) 00:39:29.899 [2024-12-16T02:01:00.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:29.899 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:39:29.899 Nvme1n1 : 1.00 242652.98 947.86 0.00 0.00 525.02 219.43 1482.36 00:39:29.899 [2024-12-16T02:01:00.558Z] =================================================================================================================== 00:39:29.899 [2024-12-16T02:01:00.558Z] Total : 242652.98 947.86 0.00 0.00 525.02 219.43 1482.36 00:39:29.899 00:39:29.899 Latency(us) 00:39:29.899 [2024-12-16T02:01:00.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:29.899 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:39:29.899 Nvme1n1 : 1.01 10296.77 40.22 0.00 0.00 12386.21 4369.07 14917.24 00:39:29.899 [2024-12-16T02:01:00.558Z] =================================================================================================================== 00:39:29.899 [2024-12-16T02:01:00.558Z] Total : 10296.77 40.22 0.00 0.00 12386.21 4369.07 14917.24 00:39:29.899 11614.00 IOPS, 45.37 MiB/s 00:39:29.899 Latency(us) 00:39:29.899 [2024-12-16T02:01:00.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:39:29.899 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:39:29.899 Nvme1n1 : 1.00 11702.98 45.71 0.00 0.00 10914.37 2044.10 16602.45 00:39:29.899 [2024-12-16T02:01:00.558Z] =================================================================================================================== 00:39:29.899 [2024-12-16T02:01:00.558Z] Total : 11702.98 45.71 0.00 0.00 10914.37 2044.10 16602.45 00:39:29.899 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1252574 00:39:30.158 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1252577 00:39:30.158 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1252580 00:39:30.158 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:30.158 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:30.158 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:30.158 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:30.158 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:39:30.159 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:39:30.159 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:30.159 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:39:30.159 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:39:30.159 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:39:30.159 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:30.159 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:30.159 rmmod nvme_tcp 00:39:30.159 rmmod nvme_fabrics 00:39:30.159 rmmod nvme_keyring 00:39:30.159 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:30.159 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:39:30.159 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:39:30.159 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1252459 ']' 00:39:30.159 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1252459 00:39:30.159 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1252459 ']' 00:39:30.159 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1252459 00:39:30.159 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:39:30.159 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:30.159 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1252459 00:39:30.159 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:30.159 03:01:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:30.159 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1252459' 00:39:30.159 killing process with pid 1252459 00:39:30.159 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1252459 00:39:30.159 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1252459 00:39:30.417 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:30.417 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:30.417 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:30.417 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:39:30.417 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:39:30.417 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:30.417 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:39:30.417 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:30.417 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:30.417 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:30.417 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:30.417 03:01:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:32.321 03:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:32.321 00:39:32.322 real 0m10.877s 00:39:32.322 user 0m14.853s 00:39:32.322 sys 0m6.633s 00:39:32.322 03:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:32.322 03:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:32.322 ************************************ 00:39:32.322 END TEST nvmf_bdev_io_wait 00:39:32.322 ************************************ 00:39:32.581 03:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:39:32.581 03:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:32.581 03:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:32.581 03:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:32.581 ************************************ 00:39:32.581 START TEST nvmf_queue_depth 00:39:32.581 ************************************ 00:39:32.581 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:39:32.581 * Looking for test storage... 
00:39:32.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:32.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:32.582 --rc genhtml_branch_coverage=1 00:39:32.582 --rc genhtml_function_coverage=1 00:39:32.582 --rc genhtml_legend=1 00:39:32.582 --rc geninfo_all_blocks=1 00:39:32.582 --rc geninfo_unexecuted_blocks=1 00:39:32.582 00:39:32.582 ' 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:32.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:32.582 --rc genhtml_branch_coverage=1 00:39:32.582 --rc genhtml_function_coverage=1 00:39:32.582 --rc genhtml_legend=1 00:39:32.582 --rc geninfo_all_blocks=1 00:39:32.582 --rc geninfo_unexecuted_blocks=1 00:39:32.582 00:39:32.582 ' 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:32.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:32.582 --rc genhtml_branch_coverage=1 00:39:32.582 --rc genhtml_function_coverage=1 00:39:32.582 --rc genhtml_legend=1 00:39:32.582 --rc geninfo_all_blocks=1 00:39:32.582 --rc geninfo_unexecuted_blocks=1 00:39:32.582 00:39:32.582 ' 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:32.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:32.582 --rc genhtml_branch_coverage=1 00:39:32.582 --rc genhtml_function_coverage=1 00:39:32.582 --rc genhtml_legend=1 00:39:32.582 --rc 
geninfo_all_blocks=1 00:39:32.582 --rc geninfo_unexecuted_blocks=1 00:39:32.582 00:39:32.582 ' 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:32.582 03:01:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:32.582 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:32.583 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:32.583 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:32.583 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:32.583 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:32.583 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:32.583 03:01:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:32.583 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:32.842 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:39:32.842 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:39:32.842 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:32.842 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:39:32.842 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:32.842 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:32.842 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:32.842 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:32.842 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:32.842 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:32.842 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:32.842 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:32.842 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:32.842 03:01:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:32.842 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:39:32.842 03:01:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:39.409 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:39.409 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:39:39.409 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:39.409 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:39.409 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:39.409 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:39.409 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:39.409 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:39:39.409 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:39.409 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:39:39.409 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:39:39.409 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:39:39.409 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:39:39.409 
03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:39:39.409 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:39.410 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:39.410 03:01:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:39.410 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:39.410 Found net devices under 0000:af:00.0: cvl_0_0 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:39.410 Found net devices under 0000:af:00.1: cvl_0_1 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:39.410 03:01:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:39.410 03:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:39.410 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:39.410 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:39.410 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:39.410 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:39.410 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:39.410 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:39.410 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:39:39.410 00:39:39.410 --- 10.0.0.2 ping statistics --- 00:39:39.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:39.410 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:39:39.410 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:39.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:39.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:39:39.410 00:39:39.410 --- 10.0.0.1 ping statistics --- 00:39:39.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:39.410 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:39:39.410 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:39.410 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:39:39.410 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:39.410 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:39.410 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:39.410 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:39.410 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:39.410 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:39.411 03:01:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1256409 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1256409 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1256409 ']' 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:39.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:39.411 [2024-12-16 03:01:09.221771] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:39.411 [2024-12-16 03:01:09.222741] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:39.411 [2024-12-16 03:01:09.222778] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:39.411 [2024-12-16 03:01:09.307372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:39.411 [2024-12-16 03:01:09.328745] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:39.411 [2024-12-16 03:01:09.328780] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:39.411 [2024-12-16 03:01:09.328787] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:39.411 [2024-12-16 03:01:09.328793] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:39.411 [2024-12-16 03:01:09.328798] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:39.411 [2024-12-16 03:01:09.329322] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:39.411 [2024-12-16 03:01:09.391727] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:39.411 [2024-12-16 03:01:09.391948] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:39.411 [2024-12-16 03:01:09.457993] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:39.411 Malloc0 00:39:39.411 03:01:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:39.411 [2024-12-16 03:01:09.537998] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.411 
03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1256428 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1256428 /var/tmp/bdevperf.sock 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1256428 ']' 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:39.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:39.411 [2024-12-16 03:01:09.589175] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:39:39.411 [2024-12-16 03:01:09.589219] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1256428 ] 00:39:39.411 [2024-12-16 03:01:09.662419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:39.411 [2024-12-16 03:01:09.684966] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:39.411 NVMe0n1 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.411 03:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:39.411 Running I/O for 10 seconds... 
00:39:41.724 11763.00 IOPS, 45.95 MiB/s [2024-12-16T02:01:13.321Z] 12226.50 IOPS, 47.76 MiB/s [2024-12-16T02:01:14.257Z] 12292.00 IOPS, 48.02 MiB/s [2024-12-16T02:01:15.194Z] 12378.25 IOPS, 48.35 MiB/s [2024-12-16T02:01:16.127Z] 12449.00 IOPS, 48.63 MiB/s [2024-12-16T02:01:17.062Z] 12468.67 IOPS, 48.71 MiB/s [2024-12-16T02:01:17.998Z] 12518.86 IOPS, 48.90 MiB/s [2024-12-16T02:01:19.374Z] 12542.62 IOPS, 48.99 MiB/s [2024-12-16T02:01:20.311Z] 12545.22 IOPS, 49.00 MiB/s [2024-12-16T02:01:20.311Z] 12575.90 IOPS, 49.12 MiB/s 00:39:49.652 Latency(us) 00:39:49.652 [2024-12-16T02:01:20.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:49.652 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:39:49.652 Verification LBA range: start 0x0 length 0x4000 00:39:49.652 NVMe0n1 : 10.07 12589.11 49.18 0.00 0.00 81050.87 19223.89 52928.12 00:39:49.652 [2024-12-16T02:01:20.311Z] =================================================================================================================== 00:39:49.652 [2024-12-16T02:01:20.311Z] Total : 12589.11 49.18 0.00 0.00 81050.87 19223.89 52928.12 00:39:49.652 { 00:39:49.652 "results": [ 00:39:49.652 { 00:39:49.652 "job": "NVMe0n1", 00:39:49.652 "core_mask": "0x1", 00:39:49.652 "workload": "verify", 00:39:49.652 "status": "finished", 00:39:49.652 "verify_range": { 00:39:49.652 "start": 0, 00:39:49.652 "length": 16384 00:39:49.652 }, 00:39:49.652 "queue_depth": 1024, 00:39:49.652 "io_size": 4096, 00:39:49.652 "runtime": 10.065925, 00:39:49.652 "iops": 12589.106316607764, 00:39:49.652 "mibps": 49.17619654924908, 00:39:49.652 "io_failed": 0, 00:39:49.652 "io_timeout": 0, 00:39:49.652 "avg_latency_us": 81050.87146663781, 00:39:49.652 "min_latency_us": 19223.893333333333, 00:39:49.652 "max_latency_us": 52928.1219047619 00:39:49.652 } 00:39:49.652 ], 00:39:49.652 "core_count": 1 00:39:49.652 } 00:39:49.652 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
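The bdevperf JSON result above reports both `iops` and `mibps` for the same run; the two are related by the configured I/O size (`-o 4096`, bytes per I/O): MiB/s = IOPS × io_size / 2^20. A minimal sketch recomputing the derived fields from the numbers in the log (the dict literal is hand-copied from the JSON above, not parsed from it):

```python
# Sketch (assumption): re-deriving bdevperf's mibps field from its iops
# field using the test's io_size of 4096 bytes. Values copied from the
# JSON results block in the log above.
result = {
    "queue_depth": 1024,
    "io_size": 4096,           # bytes per I/O (-o 4096)
    "runtime": 10.065925,      # seconds (-t 10, plus ramp)
    "iops": 12589.106316607764,
    "mibps": 49.17619654924908,
}

# MiB/s is just IOPS scaled by the I/O size in MiB.
mibps = result["iops"] * result["io_size"] / (1024 * 1024)
assert abs(mibps - result["mibps"]) < 1e-6

# Approximate total I/Os completed over the run:
total_ios = result["iops"] * result["runtime"]
print(f"{mibps:.2f} MiB/s, ~{total_ios:,.0f} I/Os in {result['runtime']:.2f}s")
```

This is only a consistency check on the reported numbers, not part of the test itself; bdevperf computes these fields internally.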
target/queue_depth.sh@39 -- # killprocess 1256428 00:39:49.652 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1256428 ']' 00:39:49.652 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1256428 00:39:49.652 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:49.652 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:49.652 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1256428 00:39:49.652 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:49.652 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:49.652 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1256428' 00:39:49.652 killing process with pid 1256428 00:39:49.652 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1256428 00:39:49.652 Received shutdown signal, test time was about 10.000000 seconds 00:39:49.652 00:39:49.652 Latency(us) 00:39:49.652 [2024-12-16T02:01:20.311Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:49.652 [2024-12-16T02:01:20.311Z] =================================================================================================================== 00:39:49.652 [2024-12-16T02:01:20.311Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:49.652 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1256428 00:39:49.652 03:01:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:39:49.652 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:39:49.652 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:49.652 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:39:49.652 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:49.652 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:39:49.652 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:49.652 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:49.652 rmmod nvme_tcp 00:39:49.652 rmmod nvme_fabrics 00:39:49.652 rmmod nvme_keyring 00:39:49.911 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:49.911 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:39:49.911 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:39:49.911 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1256409 ']' 00:39:49.911 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1256409 00:39:49.911 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1256409 ']' 00:39:49.911 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1256409 00:39:49.911 03:01:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:49.911 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:49.911 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1256409 00:39:49.911 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:49.911 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:49.911 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1256409' 00:39:49.911 killing process with pid 1256409 00:39:49.911 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1256409 00:39:49.911 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1256409 00:39:49.911 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:49.911 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:49.911 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:49.911 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:39:49.911 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:39:49.911 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:49.911 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:39:50.170 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:50.170 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:50.170 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:50.170 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:50.170 03:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:52.092 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:52.092 00:39:52.092 real 0m19.612s 00:39:52.092 user 0m22.570s 00:39:52.092 sys 0m6.185s 00:39:52.092 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:52.092 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:52.092 ************************************ 00:39:52.092 END TEST nvmf_queue_depth 00:39:52.092 ************************************ 00:39:52.092 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:52.092 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:52.092 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:52.092 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:52.092 ************************************ 00:39:52.092 START 
TEST nvmf_target_multipath 00:39:52.092 ************************************ 00:39:52.092 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:52.352 * Looking for test storage... 00:39:52.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:39:52.352 03:01:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
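The `cmp_versions` trace above (`lt 1.15 2`) splits each version string on `.`, `-` or `:` and compares components left to right, padding the shorter list with zeros. A hedged Python re-implementation sketch of that logic (the function name `version_lt` is mine, not from scripts/common.sh):

```python
import re

def version_lt(a: str, b: str) -> bool:
    """Rough Python analogue (assumption) of scripts/common.sh's
    cmp_versions with the '<' operator: split on '.', '-' or ':',
    compare numeric components left to right, missing components
    count as 0."""
    pa = [int(x) for x in re.split(r"[.:-]", a) if x.isdigit()]
    pb = [int(x) for x in re.split(r"[.:-]", b) if x.isdigit()]
    n = max(len(pa), len(pb))
    pa += [0] * (n - len(pa))
    pb += [0] * (n - len(pb))
    return pa < pb

# Matches the check in the log: lcov 1.15 is older than 2, so the
# newer --rc lcov_* option spelling is selected.
print(version_lt("1.15", "2"))  # True
```

Under these assumptions, `[1, 15]` is compared against `[2, 0]`, so `1.15 < 2` holds and the log proceeds with the newer lcov option set.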
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:52.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:52.352 --rc genhtml_branch_coverage=1 00:39:52.352 --rc genhtml_function_coverage=1 00:39:52.352 --rc genhtml_legend=1 00:39:52.352 --rc geninfo_all_blocks=1 00:39:52.352 --rc geninfo_unexecuted_blocks=1 00:39:52.352 00:39:52.352 ' 00:39:52.352 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:52.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:52.353 --rc genhtml_branch_coverage=1 00:39:52.353 --rc genhtml_function_coverage=1 00:39:52.353 --rc genhtml_legend=1 00:39:52.353 --rc geninfo_all_blocks=1 00:39:52.353 --rc geninfo_unexecuted_blocks=1 00:39:52.353 00:39:52.353 ' 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:52.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:52.353 --rc genhtml_branch_coverage=1 00:39:52.353 --rc genhtml_function_coverage=1 00:39:52.353 --rc genhtml_legend=1 00:39:52.353 --rc geninfo_all_blocks=1 00:39:52.353 --rc geninfo_unexecuted_blocks=1 00:39:52.353 00:39:52.353 ' 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:52.353 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:52.353 --rc genhtml_branch_coverage=1 00:39:52.353 --rc genhtml_function_coverage=1 00:39:52.353 --rc genhtml_legend=1 00:39:52.353 --rc geninfo_all_blocks=1 00:39:52.353 --rc geninfo_unexecuted_blocks=1 00:39:52.353 00:39:52.353 ' 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:52.353 03:01:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:52.353 03:01:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:39:52.353 03:01:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:39:58.921 03:01:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:58.921 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:58.921 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:58.921 Found net devices under 0000:af:00.0: cvl_0_0 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:58.921 03:01:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:58.921 Found net devices under 0000:af:00.1: cvl_0_1 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:58.921 03:01:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:58.921 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:58.922 03:01:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:58.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:58.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:39:58.922 00:39:58.922 --- 10.0.0.2 ping statistics --- 00:39:58.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:58.922 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:58.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:58.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:39:58.922 00:39:58.922 --- 10.0.0.1 ping statistics --- 00:39:58.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:58.922 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:39:58.922 only one NIC for nvmf test 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:39:58.922 03:01:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:58.922 rmmod nvme_tcp 00:39:58.922 rmmod nvme_fabrics 00:39:58.922 rmmod nvme_keyring 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:39:58.922 03:01:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:58.922 03:01:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:00.299 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:00.557 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:40:00.557 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:40:00.557 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:00.557 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:40:00.557 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:00.557 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:40:00.557 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:40:00.557 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:00.557 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:00.557 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:40:00.557 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:40:00.557 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:40:00.557 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:00.557 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:00.557 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:00.557 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:40:00.558 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:40:00.558 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:00.558 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:40:00.558 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:00.558 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:00.558 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:00.558 
03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:00.558 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:00.558 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:00.558 00:40:00.558 real 0m8.287s 00:40:00.558 user 0m1.785s 00:40:00.558 sys 0m4.465s 00:40:00.558 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:00.558 03:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:40:00.558 ************************************ 00:40:00.558 END TEST nvmf_target_multipath 00:40:00.558 ************************************ 00:40:00.558 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:40:00.558 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:00.558 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:00.558 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:00.558 ************************************ 00:40:00.558 START TEST nvmf_zcopy 00:40:00.558 ************************************ 00:40:00.558 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:40:00.558 * Looking for test storage... 
00:40:00.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:00.558 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:00.558 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:40:00.558 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:40:00.818 03:01:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:00.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:00.818 --rc genhtml_branch_coverage=1 00:40:00.818 --rc genhtml_function_coverage=1 00:40:00.818 --rc genhtml_legend=1 00:40:00.818 --rc geninfo_all_blocks=1 00:40:00.818 --rc geninfo_unexecuted_blocks=1 00:40:00.818 00:40:00.818 ' 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:00.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:00.818 --rc genhtml_branch_coverage=1 00:40:00.818 --rc genhtml_function_coverage=1 00:40:00.818 --rc genhtml_legend=1 00:40:00.818 --rc geninfo_all_blocks=1 00:40:00.818 --rc geninfo_unexecuted_blocks=1 00:40:00.818 00:40:00.818 ' 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:00.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:00.818 --rc genhtml_branch_coverage=1 00:40:00.818 --rc genhtml_function_coverage=1 00:40:00.818 --rc genhtml_legend=1 00:40:00.818 --rc geninfo_all_blocks=1 00:40:00.818 --rc geninfo_unexecuted_blocks=1 00:40:00.818 00:40:00.818 ' 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:00.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:00.818 --rc genhtml_branch_coverage=1 00:40:00.818 --rc genhtml_function_coverage=1 00:40:00.818 --rc genhtml_legend=1 00:40:00.818 --rc geninfo_all_blocks=1 00:40:00.818 --rc geninfo_unexecuted_blocks=1 00:40:00.818 00:40:00.818 ' 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:00.818 03:01:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:40:00.818 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:00.819 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:40:00.819 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:00.819 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:00.819 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:00.819 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:00.819 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:00.819 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:00.819 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:00.819 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:00.819 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:00.819 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:00.819 03:01:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:40:00.819 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:00.819 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:00.819 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:00.819 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:00.819 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:00.819 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:00.819 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:00.819 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:00.819 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:00.819 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:00.819 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:40:00.819 03:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:07.389 
03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:07.389 03:01:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:07.389 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:07.389 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:07.389 Found net devices under 0000:af:00.0: cvl_0_0 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:07.389 Found net devices under 0000:af:00.1: cvl_0_1 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:40:07.389 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:07.390 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:07.390 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:07.390 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:07.390 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:07.390 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:07.390 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:07.390 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:07.390 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:40:07.390 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:07.390 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:07.390 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:07.390 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:07.390 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:07.390 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:07.390 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:07.390 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:07.390 03:01:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:07.390 03:01:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:07.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:07.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:40:07.390 00:40:07.390 --- 10.0.0.2 ping statistics --- 00:40:07.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:07.390 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:07.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:07.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:40:07.390 00:40:07.390 --- 10.0.0.1 ping statistics --- 00:40:07.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:07.390 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=1264917 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1264917 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1264917 ']' 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:07.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:07.390 [2024-12-16 03:01:37.350667] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:07.390 [2024-12-16 03:01:37.351567] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:40:07.390 [2024-12-16 03:01:37.351601] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:07.390 [2024-12-16 03:01:37.425896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:07.390 [2024-12-16 03:01:37.446408] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:07.390 [2024-12-16 03:01:37.446440] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:07.390 [2024-12-16 03:01:37.446447] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:07.390 [2024-12-16 03:01:37.446453] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:07.390 [2024-12-16 03:01:37.446457] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:07.390 [2024-12-16 03:01:37.446948] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:07.390 [2024-12-16 03:01:37.508060] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:07.390 [2024-12-16 03:01:37.508256] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:07.390 [2024-12-16 03:01:37.575618] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:07.390 
03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:07.390 [2024-12-16 03:01:37.603803] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:07.390 malloc0 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:07.390 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:07.391 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:07.391 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:40:07.391 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:40:07.391 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:40:07.391 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:40:07.391 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:07.391 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:07.391 { 00:40:07.391 "params": { 00:40:07.391 "name": "Nvme$subsystem", 00:40:07.391 "trtype": "$TEST_TRANSPORT", 00:40:07.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:07.391 "adrfam": "ipv4", 00:40:07.391 "trsvcid": "$NVMF_PORT", 00:40:07.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:07.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:07.391 "hdgst": ${hdgst:-false}, 00:40:07.391 "ddgst": ${ddgst:-false} 00:40:07.391 }, 00:40:07.391 "method": "bdev_nvme_attach_controller" 00:40:07.391 } 00:40:07.391 EOF 00:40:07.391 )") 00:40:07.391 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:40:07.391 03:01:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:40:07.391 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:40:07.391 03:01:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:07.391 "params": { 00:40:07.391 "name": "Nvme1", 00:40:07.391 "trtype": "tcp", 00:40:07.391 "traddr": "10.0.0.2", 00:40:07.391 "adrfam": "ipv4", 00:40:07.391 "trsvcid": "4420", 00:40:07.391 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:07.391 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:07.391 "hdgst": false, 00:40:07.391 "ddgst": false 00:40:07.391 }, 00:40:07.391 "method": "bdev_nvme_attach_controller" 00:40:07.391 }' 00:40:07.391 [2024-12-16 03:01:37.695028] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:40:07.391 [2024-12-16 03:01:37.695067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264944 ] 00:40:07.391 [2024-12-16 03:01:37.769628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:07.391 [2024-12-16 03:01:37.791766] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:07.391 Running I/O for 10 seconds... 
00:40:09.705 8633.00 IOPS, 67.45 MiB/s [2024-12-16T02:01:41.299Z] 8675.00 IOPS, 67.77 MiB/s [2024-12-16T02:01:42.236Z] 8664.67 IOPS, 67.69 MiB/s [2024-12-16T02:01:43.172Z] 8673.25 IOPS, 67.76 MiB/s [2024-12-16T02:01:44.108Z] 8675.40 IOPS, 67.78 MiB/s [2024-12-16T02:01:45.044Z] 8686.50 IOPS, 67.86 MiB/s [2024-12-16T02:01:46.029Z] 8692.29 IOPS, 67.91 MiB/s [2024-12-16T02:01:47.042Z] 8699.62 IOPS, 67.97 MiB/s [2024-12-16T02:01:48.421Z] 8698.67 IOPS, 67.96 MiB/s [2024-12-16T02:01:48.421Z] 8700.60 IOPS, 67.97 MiB/s 00:40:17.762 Latency(us) 00:40:17.762 [2024-12-16T02:01:48.421Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:17.762 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:40:17.762 Verification LBA range: start 0x0 length 0x1000 00:40:17.762 Nvme1n1 : 10.01 8705.37 68.01 0.00 0.00 14661.67 873.81 21470.84 00:40:17.762 [2024-12-16T02:01:48.421Z] =================================================================================================================== 00:40:17.762 [2024-12-16T02:01:48.421Z] Total : 8705.37 68.01 0.00 0.00 14661.67 873.81 21470.84 00:40:17.762 03:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1266713 00:40:17.762 03:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:40:17.762 03:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:17.762 03:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:40:17.762 03:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:40:17.762 03:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:40:17.762 03:01:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:40:17.762 03:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:17.762 03:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:17.762 { 00:40:17.762 "params": { 00:40:17.762 "name": "Nvme$subsystem", 00:40:17.762 "trtype": "$TEST_TRANSPORT", 00:40:17.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:17.762 "adrfam": "ipv4", 00:40:17.762 "trsvcid": "$NVMF_PORT", 00:40:17.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:17.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:17.762 "hdgst": ${hdgst:-false}, 00:40:17.762 "ddgst": ${ddgst:-false} 00:40:17.762 }, 00:40:17.762 "method": "bdev_nvme_attach_controller" 00:40:17.762 } 00:40:17.762 EOF 00:40:17.762 )") 00:40:17.762 03:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:40:17.762 [2024-12-16 03:01:48.171274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:17.762 [2024-12-16 03:01:48.171309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:17.762 03:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:40:17.762 03:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:40:17.762 03:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:17.762 "params": { 00:40:17.762 "name": "Nvme1", 00:40:17.762 "trtype": "tcp", 00:40:17.762 "traddr": "10.0.0.2", 00:40:17.762 "adrfam": "ipv4", 00:40:17.762 "trsvcid": "4420", 00:40:17.762 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:17.762 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:17.762 "hdgst": false, 00:40:17.762 "ddgst": false 00:40:17.762 }, 00:40:17.762 "method": "bdev_nvme_attach_controller" 00:40:17.762 }' 00:40:17.762 [2024-12-16 03:01:48.183242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:17.762 [2024-12-16 03:01:48.183256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:17.762 [2024-12-16 03:01:48.195239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:17.762 [2024-12-16 03:01:48.195249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:17.762 [2024-12-16 03:01:48.207237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:17.762 [2024-12-16 03:01:48.207247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:17.762 [2024-12-16 03:01:48.208172] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:40:17.762 [2024-12-16 03:01:48.208216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1266713 ] 00:40:17.762 [2024-12-16 03:01:48.219238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:17.762 [2024-12-16 03:01:48.219250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:17.762 [2024-12-16 03:01:48.231237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:17.762 [2024-12-16 03:01:48.231247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:17.762 [2024-12-16 03:01:48.243237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:17.762 [2024-12-16 03:01:48.243247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:17.762 [2024-12-16 03:01:48.255237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:17.762 [2024-12-16 03:01:48.255247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:17.762 [2024-12-16 03:01:48.267239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:17.762 [2024-12-16 03:01:48.267249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:17.762 [2024-12-16 03:01:48.279238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:17.762 [2024-12-16 03:01:48.279248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:17.762 [2024-12-16 03:01:48.281096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:17.762 [2024-12-16 03:01:48.291240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:40:17.762 [2024-12-16 03:01:48.291256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:17.762 [2024-12-16 03:01:48.303124] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:17.762 [2024-12-16 03:01:48.303241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:17.762 [2024-12-16 03:01:48.303254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:17.762 [2024-12-16 03:01:48.315250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:17.762 [2024-12-16 03:01:48.315268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:17.762 [2024-12-16 03:01:48.327250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:17.762 [2024-12-16 03:01:48.327271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:17.762 [2024-12-16 03:01:48.339248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:17.762 [2024-12-16 03:01:48.339264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:17.762 [2024-12-16 03:01:48.351238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:17.762 [2024-12-16 03:01:48.351251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:17.762 [2024-12-16 03:01:48.363243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:17.762 [2024-12-16 03:01:48.363259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:17.762 [2024-12-16 03:01:48.375239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:17.762 [2024-12-16 03:01:48.375251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:17.763 [2024-12-16 03:01:48.387252] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:17.763 [2024-12-16 03:01:48.387278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:17.763 [2024-12-16 03:01:48.399248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:17.763 [2024-12-16 03:01:48.399265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:17.763 [2024-12-16 03:01:48.411241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:17.763 [2024-12-16 03:01:48.411256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.023 [2024-12-16 03:01:48.423241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.023 [2024-12-16 03:01:48.423252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.023 [2024-12-16 03:01:48.435240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.023 [2024-12-16 03:01:48.435252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.023 [2024-12-16 03:01:48.447237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.023 [2024-12-16 03:01:48.447248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.023 [2024-12-16 03:01:48.459242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.023 [2024-12-16 03:01:48.459257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.023 [2024-12-16 03:01:48.471236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.023 [2024-12-16 03:01:48.471246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.023 [2024-12-16 03:01:48.483239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:18.023 [2024-12-16 03:01:48.483249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.023 [2024-12-16 03:01:48.495243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.023 [2024-12-16 03:01:48.495256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.023 [2024-12-16 03:01:48.507241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.023 [2024-12-16 03:01:48.507254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.023 [2024-12-16 03:01:48.519239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.023 [2024-12-16 03:01:48.519250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.023 [2024-12-16 03:01:48.531236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.023 [2024-12-16 03:01:48.531246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.023 [2024-12-16 03:01:48.543236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.023 [2024-12-16 03:01:48.543246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.023 [2024-12-16 03:01:48.555241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.023 [2024-12-16 03:01:48.555255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.023 [2024-12-16 03:01:48.567237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.023 [2024-12-16 03:01:48.567247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.023 [2024-12-16 03:01:48.579238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.023 
[2024-12-16 03:01:48.579248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.023 [2024-12-16 03:01:48.591236] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.023 [2024-12-16 03:01:48.591248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.023 [2024-12-16 03:01:48.603241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.023 [2024-12-16 03:01:48.603259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.023 Running I/O for 5 seconds... 00:40:18.023 [2024-12-16 03:01:48.620943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.023 [2024-12-16 03:01:48.620963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.023 [2024-12-16 03:01:48.635397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.023 [2024-12-16 03:01:48.635416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.023 [2024-12-16 03:01:48.648949] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.023 [2024-12-16 03:01:48.648968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.023 [2024-12-16 03:01:48.663387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.023 [2024-12-16 03:01:48.663406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.023 [2024-12-16 03:01:48.675398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.023 [2024-12-16 03:01:48.675416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.282 [2024-12-16 03:01:48.689018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.282 [2024-12-16 
03:01:48.689038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.282 [2024-12-16 03:01:48.704175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.282 [2024-12-16 03:01:48.704195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.282 [2024-12-16 03:01:48.719244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.282 [2024-12-16 03:01:48.719268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.282 [2024-12-16 03:01:48.731986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.282 [2024-12-16 03:01:48.732005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.282 [2024-12-16 03:01:48.744586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.282 [2024-12-16 03:01:48.744605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.282 [2024-12-16 03:01:48.755521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.282 [2024-12-16 03:01:48.755539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.282 [2024-12-16 03:01:48.769187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.282 [2024-12-16 03:01:48.769206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.282 [2024-12-16 03:01:48.783905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.282 [2024-12-16 03:01:48.783924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.282 [2024-12-16 03:01:48.798793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.282 [2024-12-16 03:01:48.798811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:40:18.282 [2024-12-16 03:01:48.809794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.282 [2024-12-16 03:01:48.809813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.282 [2024-12-16 03:01:48.824373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.282 [2024-12-16 03:01:48.824392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.282 [2024-12-16 03:01:48.838728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.282 [2024-12-16 03:01:48.838747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.282 [2024-12-16 03:01:48.853435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.282 [2024-12-16 03:01:48.853454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.282 [2024-12-16 03:01:48.868289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.282 [2024-12-16 03:01:48.868309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.282 [2024-12-16 03:01:48.882813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.282 [2024-12-16 03:01:48.882833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.282 [2024-12-16 03:01:48.895775] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.282 [2024-12-16 03:01:48.895794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.282 [2024-12-16 03:01:48.908524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.282 [2024-12-16 03:01:48.908542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.282 
[2024-12-16 03:01:48.919717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.282 [2024-12-16 03:01:48.919735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.282 [2024-12-16 03:01:48.933060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.283 [2024-12-16 03:01:48.933079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.542 [2024-12-16 03:01:48.947613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.542 [2024-12-16 03:01:48.947633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.542 [2024-12-16 03:01:48.962765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.542 [2024-12-16 03:01:48.962784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.542 [2024-12-16 03:01:48.977280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.542 [2024-12-16 03:01:48.977299] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.542 [2024-12-16 03:01:48.991649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.542 [2024-12-16 03:01:48.991668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.542 [2024-12-16 03:01:49.007501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.542 [2024-12-16 03:01:49.007521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.542 [2024-12-16 03:01:49.020284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.542 [2024-12-16 03:01:49.020304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.542 [2024-12-16 03:01:49.031682] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.542 [2024-12-16 03:01:49.031700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.542 [2024-12-16 03:01:49.044966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.542 [2024-12-16 03:01:49.044985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.542 [2024-12-16 03:01:49.059737] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.542 [2024-12-16 03:01:49.059756] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.542 [2024-12-16 03:01:49.075182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.542 [2024-12-16 03:01:49.075201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.542 [2024-12-16 03:01:49.087867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.542 [2024-12-16 03:01:49.087886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.542 [2024-12-16 03:01:49.101064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.542 [2024-12-16 03:01:49.101083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.542 [2024-12-16 03:01:49.116030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.542 [2024-12-16 03:01:49.116048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.542 [2024-12-16 03:01:49.126778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.542 [2024-12-16 03:01:49.126806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.542 [2024-12-16 03:01:49.140584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:18.542 [2024-12-16 03:01:49.140603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.542 [2024-12-16 03:01:49.155097] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.542 [2024-12-16 03:01:49.155117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.542 [2024-12-16 03:01:49.168434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.542 [2024-12-16 03:01:49.168454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.542 [2024-12-16 03:01:49.179503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.542 [2024-12-16 03:01:49.179521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.542 [2024-12-16 03:01:49.193197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.542 [2024-12-16 03:01:49.193217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.801 [2024-12-16 03:01:49.207946] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.801 [2024-12-16 03:01:49.207966] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.801 [2024-12-16 03:01:49.222859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.801 [2024-12-16 03:01:49.222878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.801 [2024-12-16 03:01:49.236724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.801 [2024-12-16 03:01:49.236742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.801 [2024-12-16 03:01:49.246948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.801 
[2024-12-16 03:01:49.246968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.801 [2024-12-16 03:01:49.261238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.801 [2024-12-16 03:01:49.261258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.801 [2024-12-16 03:01:49.275838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.801 [2024-12-16 03:01:49.275873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.801 [2024-12-16 03:01:49.291400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.801 [2024-12-16 03:01:49.291420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.801 [2024-12-16 03:01:49.304101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.801 [2024-12-16 03:01:49.304119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.801 [2024-12-16 03:01:49.316482] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.801 [2024-12-16 03:01:49.316501] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.801 [2024-12-16 03:01:49.331519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.801 [2024-12-16 03:01:49.331538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.801 [2024-12-16 03:01:49.346651] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.801 [2024-12-16 03:01:49.346670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.801 [2024-12-16 03:01:49.360553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.801 [2024-12-16 03:01:49.360573] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.801 [2024-12-16 03:01:49.375452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.801 [2024-12-16 03:01:49.375472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.801 [2024-12-16 03:01:49.388027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.801 [2024-12-16 03:01:49.388051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.801 [2024-12-16 03:01:49.403668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.801 [2024-12-16 03:01:49.403692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.801 [2024-12-16 03:01:49.418790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.801 [2024-12-16 03:01:49.418810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.801 [2024-12-16 03:01:49.432339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.801 [2024-12-16 03:01:49.432359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.801 [2024-12-16 03:01:49.443544] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.801 [2024-12-16 03:01:49.443563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:18.801 [2024-12-16 03:01:49.457302] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:18.801 [2024-12-16 03:01:49.457322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.060 [2024-12-16 03:01:49.472044] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.060 [2024-12-16 03:01:49.472063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:40:19.060 [2024-12-16 03:01:49.486937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.060 [2024-12-16 03:01:49.486957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.060 [2024-12-16 03:01:49.500608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.060 [2024-12-16 03:01:49.500627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.060 [2024-12-16 03:01:49.515073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.060 [2024-12-16 03:01:49.515095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.060 [2024-12-16 03:01:49.527683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.060 [2024-12-16 03:01:49.527702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.060 [2024-12-16 03:01:49.540859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.060 [2024-12-16 03:01:49.540878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.060 [2024-12-16 03:01:49.555306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.060 [2024-12-16 03:01:49.555326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.060 [2024-12-16 03:01:49.566161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.060 [2024-12-16 03:01:49.566180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.060 [2024-12-16 03:01:49.580761] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.060 [2024-12-16 03:01:49.580780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.060 [2024-12-16 03:01:49.595197] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.061 [2024-12-16 03:01:49.595215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.061 [2024-12-16 03:01:49.606054] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.061 [2024-12-16 03:01:49.606072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.061 16949.00 IOPS, 132.41 MiB/s [2024-12-16T02:01:49.720Z] [2024-12-16 03:01:49.620661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.061 [2024-12-16 03:01:49.620680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.061 [2024-12-16 03:01:49.631043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.061 [2024-12-16 03:01:49.631062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.061 [2024-12-16 03:01:49.644800] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.061 [2024-12-16 03:01:49.644830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.061 [2024-12-16 03:01:49.654714] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.061 [2024-12-16 03:01:49.654734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.061 [2024-12-16 03:01:49.668875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.061 [2024-12-16 03:01:49.668894] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.061 [2024-12-16 03:01:49.683480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.061 [2024-12-16 03:01:49.683499] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.061 [2024-12-16 03:01:49.694057] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:19.061 [2024-12-16 03:01:49.694076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:19.061
[... the same spdk_nvmf_subsystem_add_ns_ext / nvmf_rpc_ns_paused error pair repeats at roughly 11-16 ms intervals from 03:01:49.709 through 03:01:52.072, with periodic throughput samples interleaved: ...]
16859.00 IOPS, 131.71 MiB/s [2024-12-16T02:01:50.757Z]
16840.67 IOPS, 131.57 MiB/s [2024-12-16T02:01:51.795Z]
add namespace 00:40:21.655 [2024-12-16 03:01:52.087876] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.655 [2024-12-16 03:01:52.087897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.655 [2024-12-16 03:01:52.099351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.655 [2024-12-16 03:01:52.099370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.655 [2024-12-16 03:01:52.113439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.655 [2024-12-16 03:01:52.113458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.655 [2024-12-16 03:01:52.127812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.655 [2024-12-16 03:01:52.127830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.655 [2024-12-16 03:01:52.143713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.655 [2024-12-16 03:01:52.143732] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.655 [2024-12-16 03:01:52.158723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.655 [2024-12-16 03:01:52.158742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.655 [2024-12-16 03:01:52.173045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.655 [2024-12-16 03:01:52.173064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.655 [2024-12-16 03:01:52.187069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.655 [2024-12-16 03:01:52.187089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.655 [2024-12-16 03:01:52.200514] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.655 [2024-12-16 03:01:52.200533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.655 [2024-12-16 03:01:52.211664] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.655 [2024-12-16 03:01:52.211684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.655 [2024-12-16 03:01:52.225465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.655 [2024-12-16 03:01:52.225483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.655 [2024-12-16 03:01:52.240305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.655 [2024-12-16 03:01:52.240325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.655 [2024-12-16 03:01:52.255364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.655 [2024-12-16 03:01:52.255384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.655 [2024-12-16 03:01:52.268788] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.655 [2024-12-16 03:01:52.268807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.655 [2024-12-16 03:01:52.283427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.655 [2024-12-16 03:01:52.283447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.655 [2024-12-16 03:01:52.293834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.655 [2024-12-16 03:01:52.293861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.655 [2024-12-16 03:01:52.308225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:21.655 [2024-12-16 03:01:52.308244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.915 [2024-12-16 03:01:52.322984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.915 [2024-12-16 03:01:52.323004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.915 [2024-12-16 03:01:52.336916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.915 [2024-12-16 03:01:52.336936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.915 [2024-12-16 03:01:52.351135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.915 [2024-12-16 03:01:52.351155] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.915 [2024-12-16 03:01:52.365215] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.915 [2024-12-16 03:01:52.365234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.915 [2024-12-16 03:01:52.379621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.915 [2024-12-16 03:01:52.379640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.915 [2024-12-16 03:01:52.392150] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.915 [2024-12-16 03:01:52.392169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.915 [2024-12-16 03:01:52.404680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.915 [2024-12-16 03:01:52.404700] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.915 [2024-12-16 03:01:52.419350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.915 
[2024-12-16 03:01:52.419369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.915 [2024-12-16 03:01:52.431616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.915 [2024-12-16 03:01:52.431635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.915 [2024-12-16 03:01:52.445346] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.915 [2024-12-16 03:01:52.445365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.915 [2024-12-16 03:01:52.460139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.915 [2024-12-16 03:01:52.460158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.915 [2024-12-16 03:01:52.475044] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.915 [2024-12-16 03:01:52.475063] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.915 [2024-12-16 03:01:52.488732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.915 [2024-12-16 03:01:52.488751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.915 [2024-12-16 03:01:52.503447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.915 [2024-12-16 03:01:52.503466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.915 [2024-12-16 03:01:52.515084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.915 [2024-12-16 03:01:52.515107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.915 [2024-12-16 03:01:52.529375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.915 [2024-12-16 03:01:52.529394] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.915 [2024-12-16 03:01:52.544099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.915 [2024-12-16 03:01:52.544117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.915 [2024-12-16 03:01:52.559537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.915 [2024-12-16 03:01:52.559556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:21.915 [2024-12-16 03:01:52.571774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:21.915 [2024-12-16 03:01:52.571792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.174 [2024-12-16 03:01:52.584953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.174 [2024-12-16 03:01:52.584972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.174 [2024-12-16 03:01:52.599992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.174 [2024-12-16 03:01:52.600011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.174 [2024-12-16 03:01:52.615122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.174 [2024-12-16 03:01:52.615140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.174 16864.75 IOPS, 131.76 MiB/s [2024-12-16T02:01:52.833Z] [2024-12-16 03:01:52.629118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.174 [2024-12-16 03:01:52.629137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.174 [2024-12-16 03:01:52.643749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.174 [2024-12-16 03:01:52.643767] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.174 [2024-12-16 03:01:52.659684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.174 [2024-12-16 03:01:52.659703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.174 [2024-12-16 03:01:52.675050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.174 [2024-12-16 03:01:52.675069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.174 [2024-12-16 03:01:52.689311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.174 [2024-12-16 03:01:52.689330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.174 [2024-12-16 03:01:52.703326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.174 [2024-12-16 03:01:52.703350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.174 [2024-12-16 03:01:52.714390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.174 [2024-12-16 03:01:52.714409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.174 [2024-12-16 03:01:52.729106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.174 [2024-12-16 03:01:52.729124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.174 [2024-12-16 03:01:52.743940] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.174 [2024-12-16 03:01:52.743958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.174 [2024-12-16 03:01:52.758820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.174 [2024-12-16 03:01:52.758839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:40:22.174 [2024-12-16 03:01:52.772743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.174 [2024-12-16 03:01:52.772761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.174 [2024-12-16 03:01:52.787445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.174 [2024-12-16 03:01:52.787468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.174 [2024-12-16 03:01:52.799866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.175 [2024-12-16 03:01:52.799885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.175 [2024-12-16 03:01:52.812896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.175 [2024-12-16 03:01:52.812914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.175 [2024-12-16 03:01:52.827550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.175 [2024-12-16 03:01:52.827568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.433 [2024-12-16 03:01:52.842809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.433 [2024-12-16 03:01:52.842833] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.433 [2024-12-16 03:01:52.856855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.433 [2024-12-16 03:01:52.856874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.433 [2024-12-16 03:01:52.871520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.433 [2024-12-16 03:01:52.871546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.433 [2024-12-16 03:01:52.887522] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.433 [2024-12-16 03:01:52.887541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.433 [2024-12-16 03:01:52.899820] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.433 [2024-12-16 03:01:52.899838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.433 [2024-12-16 03:01:52.912967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.433 [2024-12-16 03:01:52.912987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.433 [2024-12-16 03:01:52.928247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.433 [2024-12-16 03:01:52.928267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.433 [2024-12-16 03:01:52.943645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.433 [2024-12-16 03:01:52.943663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.433 [2024-12-16 03:01:52.958948] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.433 [2024-12-16 03:01:52.958967] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.433 [2024-12-16 03:01:52.973082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.433 [2024-12-16 03:01:52.973101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.433 [2024-12-16 03:01:52.987679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.433 [2024-12-16 03:01:52.987697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.433 [2024-12-16 03:01:53.003280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:22.433 [2024-12-16 03:01:53.003298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.433 [2024-12-16 03:01:53.016019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.433 [2024-12-16 03:01:53.016037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.433 [2024-12-16 03:01:53.031083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.433 [2024-12-16 03:01:53.031102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.433 [2024-12-16 03:01:53.044192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.433 [2024-12-16 03:01:53.044211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.433 [2024-12-16 03:01:53.058992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.433 [2024-12-16 03:01:53.059028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.433 [2024-12-16 03:01:53.072862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.433 [2024-12-16 03:01:53.072881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.433 [2024-12-16 03:01:53.087697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.433 [2024-12-16 03:01:53.087717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.692 [2024-12-16 03:01:53.102922] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.692 [2024-12-16 03:01:53.102943] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.692 [2024-12-16 03:01:53.117280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.692 
[2024-12-16 03:01:53.117300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.692 [2024-12-16 03:01:53.131605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.692 [2024-12-16 03:01:53.131625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.692 [2024-12-16 03:01:53.144014] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.692 [2024-12-16 03:01:53.144033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.692 [2024-12-16 03:01:53.157151] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.692 [2024-12-16 03:01:53.157169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.692 [2024-12-16 03:01:53.171250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.692 [2024-12-16 03:01:53.171268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.692 [2024-12-16 03:01:53.184771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.692 [2024-12-16 03:01:53.184790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.692 [2024-12-16 03:01:53.199202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.692 [2024-12-16 03:01:53.199221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.692 [2024-12-16 03:01:53.211986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.692 [2024-12-16 03:01:53.212005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.692 [2024-12-16 03:01:53.224497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.692 [2024-12-16 03:01:53.224516] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.692 [2024-12-16 03:01:53.239270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.692 [2024-12-16 03:01:53.239290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.692 [2024-12-16 03:01:53.252881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.692 [2024-12-16 03:01:53.252901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.692 [2024-12-16 03:01:53.267277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.692 [2024-12-16 03:01:53.267296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.692 [2024-12-16 03:01:53.280399] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.692 [2024-12-16 03:01:53.280419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.692 [2024-12-16 03:01:53.295164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.692 [2024-12-16 03:01:53.295184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.692 [2024-12-16 03:01:53.307819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.692 [2024-12-16 03:01:53.307838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.692 [2024-12-16 03:01:53.320727] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.692 [2024-12-16 03:01:53.320746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.692 [2024-12-16 03:01:53.331600] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.692 [2024-12-16 03:01:53.331619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:40:22.692 [2024-12-16 03:01:53.344773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.692 [2024-12-16 03:01:53.344791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.951 [2024-12-16 03:01:53.355602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.951 [2024-12-16 03:01:53.355621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.951 [2024-12-16 03:01:53.369118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.951 [2024-12-16 03:01:53.369136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.951 [2024-12-16 03:01:53.383330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.952 [2024-12-16 03:01:53.383349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.952 [2024-12-16 03:01:53.396148] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.952 [2024-12-16 03:01:53.396166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.952 [2024-12-16 03:01:53.407197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.952 [2024-12-16 03:01:53.407215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.952 [2024-12-16 03:01:53.420818] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.952 [2024-12-16 03:01:53.420837] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.952 [2024-12-16 03:01:53.435388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.952 [2024-12-16 03:01:53.435406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.952 [2024-12-16 03:01:53.446070] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.952 [2024-12-16 03:01:53.446090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.952 [2024-12-16 03:01:53.460411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.952 [2024-12-16 03:01:53.460430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.952 [2024-12-16 03:01:53.475203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.952 [2024-12-16 03:01:53.475223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.952 [2024-12-16 03:01:53.488438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.952 [2024-12-16 03:01:53.488459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.952 [2024-12-16 03:01:53.502979] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.952 [2024-12-16 03:01:53.502998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.952 [2024-12-16 03:01:53.514092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.952 [2024-12-16 03:01:53.514112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.952 [2024-12-16 03:01:53.529088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.952 [2024-12-16 03:01:53.529107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.952 [2024-12-16 03:01:53.544119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.952 [2024-12-16 03:01:53.544139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.952 [2024-12-16 03:01:53.555475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:22.952 [2024-12-16 03:01:53.555494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.952 [2024-12-16 03:01:53.570382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.952 [2024-12-16 03:01:53.570402] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.952 [2024-12-16 03:01:53.584566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.952 [2024-12-16 03:01:53.584586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.952 [2024-12-16 03:01:53.594419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.952 [2024-12-16 03:01:53.594439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:22.952 [2024-12-16 03:01:53.609057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:22.952 [2024-12-16 03:01:53.609076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.211 [2024-12-16 03:01:53.623407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.211 [2024-12-16 03:01:53.623427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.211 16890.20 IOPS, 131.95 MiB/s [2024-12-16T02:01:53.871Z] [2024-12-16 03:01:53.631255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.212 [2024-12-16 03:01:53.631274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.212 00:40:23.212 Latency(us) 00:40:23.212 [2024-12-16T02:01:53.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:23.212 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:40:23.212 Nvme1n1 : 5.01 16894.15 131.99 0.00 0.00 7569.78 1927.07 13294.45 
00:40:23.212 [2024-12-16T02:01:53.871Z] =================================================================================================================== 00:40:23.212 [2024-12-16T02:01:53.871Z] Total : 16894.15 131.99 0.00 0.00 7569.78 1927.07 13294.45 00:40:23.212 [2024-12-16 03:01:53.643245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.212 [2024-12-16 03:01:53.643262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.212 [2024-12-16 03:01:53.655256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.212 [2024-12-16 03:01:53.655275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.212 [2024-12-16 03:01:53.667256] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.212 [2024-12-16 03:01:53.667279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.212 [2024-12-16 03:01:53.679250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.212 [2024-12-16 03:01:53.679271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.212 [2024-12-16 03:01:53.691247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.212 [2024-12-16 03:01:53.691263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.212 [2024-12-16 03:01:53.703248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.212 [2024-12-16 03:01:53.703266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.212 [2024-12-16 03:01:53.715246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.212 [2024-12-16 03:01:53.715263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.212 [2024-12-16 03:01:53.727251] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.212 [2024-12-16 03:01:53.727269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.212 [2024-12-16 03:01:53.739241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.212 [2024-12-16 03:01:53.739256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.212 [2024-12-16 03:01:53.751239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.212 [2024-12-16 03:01:53.751257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.212 [2024-12-16 03:01:53.763244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.212 [2024-12-16 03:01:53.763260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.212 [2024-12-16 03:01:53.775238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:23.212 [2024-12-16 03:01:53.775249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:23.212 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1266713) - No such process 00:40:23.212 03:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1266713 00:40:23.212 03:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:23.212 03:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:23.212 03:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:23.212 03:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:23.212 03:01:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:40:23.212 03:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:23.212 03:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:23.212 delay0 00:40:23.212 03:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:23.212 03:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:40:23.212 03:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:23.212 03:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:23.212 03:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:23.212 03:01:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:40:23.471 [2024-12-16 03:01:53.879355] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:40:30.038 Initializing NVMe Controllers 00:40:30.039 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:30.039 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:40:30.039 Initialization complete. Launching workers. 
00:40:30.039 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 879 00:40:30.039 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1161, failed to submit 38 00:40:30.039 success 1014, unsuccessful 147, failed 0 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:30.039 rmmod nvme_tcp 00:40:30.039 rmmod nvme_fabrics 00:40:30.039 rmmod nvme_keyring 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1264917 ']' 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 1264917 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 
-- # '[' -z 1264917 ']' 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1264917 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1264917 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1264917' 00:40:30.039 killing process with pid 1264917 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1264917 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1264917 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:30.039 03:02:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:30.039 03:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:32.575 00:40:32.575 real 0m31.586s 00:40:32.575 user 0m40.730s 00:40:32.575 sys 0m12.426s 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:32.575 ************************************ 00:40:32.575 END TEST nvmf_zcopy 00:40:32.575 ************************************ 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:32.575 
************************************ 00:40:32.575 START TEST nvmf_nmic 00:40:32.575 ************************************ 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:40:32.575 * Looking for test storage... 00:40:32.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:40:32.575 03:02:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:40:32.575 03:02:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:32.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:32.575 --rc genhtml_branch_coverage=1 00:40:32.575 --rc genhtml_function_coverage=1 00:40:32.575 --rc genhtml_legend=1 00:40:32.575 --rc geninfo_all_blocks=1 00:40:32.575 --rc geninfo_unexecuted_blocks=1 00:40:32.575 00:40:32.575 ' 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:32.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:32.575 --rc genhtml_branch_coverage=1 00:40:32.575 --rc genhtml_function_coverage=1 00:40:32.575 --rc genhtml_legend=1 00:40:32.575 --rc geninfo_all_blocks=1 00:40:32.575 --rc geninfo_unexecuted_blocks=1 00:40:32.575 00:40:32.575 ' 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:32.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:32.575 --rc genhtml_branch_coverage=1 00:40:32.575 --rc genhtml_function_coverage=1 00:40:32.575 --rc genhtml_legend=1 00:40:32.575 --rc geninfo_all_blocks=1 00:40:32.575 --rc geninfo_unexecuted_blocks=1 00:40:32.575 00:40:32.575 ' 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:32.575 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:32.575 --rc genhtml_branch_coverage=1 00:40:32.575 --rc genhtml_function_coverage=1 00:40:32.575 --rc genhtml_legend=1 00:40:32.575 --rc geninfo_all_blocks=1 00:40:32.575 --rc geninfo_unexecuted_blocks=1 00:40:32.575 00:40:32.575 ' 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:32.575 03:02:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:32.575 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:32.576 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:32.576 03:02:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:40:32.576 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:32.576 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:40:32.576 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:32.576 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:32.576 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:32.576 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:32.576 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:32.576 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:32.576 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:32.576 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:32.576 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:40:32.576 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:32.576 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:32.576 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:32.576 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:40:32.576 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:32.576 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:32.576 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:32.576 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:32.576 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:32.576 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:32.576 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:32.576 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:32.576 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:32.576 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:32.576 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:40:32.576 03:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:39.148 03:02:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:39.148 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:40:39.148 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:39.148 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:39.148 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:39.148 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:39.148 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:39.148 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:40:39.148 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:39.149 03:02:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:39.149 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:39.149 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:39.149 03:02:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:39.149 Found net devices under 0000:af:00.0: cvl_0_0 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:39.149 03:02:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:39.149 Found net devices under 0000:af:00.1: cvl_0_1 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:39.149 03:02:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:39.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:39.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.449 ms 00:40:39.149 00:40:39.149 --- 10.0.0.2 ping statistics --- 00:40:39.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:39.149 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:40:39.149 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:39.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:39.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:40:39.149 00:40:39.149 --- 10.0.0.1 ping statistics --- 00:40:39.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:39.149 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:40:39.150 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:39.150 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:40:39.150 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:39.150 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:39.150 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:39.150 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:39.150 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:39.150 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:39.150 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:39.150 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:40:39.150 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:39.150 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:39.150 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:39.150 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1271957 
00:40:39.150 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1271957 00:40:39.150 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:40:39.150 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1271957 ']' 00:40:39.150 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:39.150 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:39.150 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:39.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:39.150 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:39.150 03:02:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:39.150 [2024-12-16 03:02:08.886048] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:39.150 [2024-12-16 03:02:08.887020] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:40:39.150 [2024-12-16 03:02:08.887056] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:39.150 [2024-12-16 03:02:08.967784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:39.150 [2024-12-16 03:02:08.991861] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:39.150 [2024-12-16 03:02:08.991898] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:39.150 [2024-12-16 03:02:08.991906] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:39.150 [2024-12-16 03:02:08.991913] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:39.150 [2024-12-16 03:02:08.991919] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:39.150 [2024-12-16 03:02:08.993386] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:39.150 [2024-12-16 03:02:08.993495] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:40:39.150 [2024-12-16 03:02:08.993511] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:40:39.150 [2024-12-16 03:02:08.993514] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:39.150 [2024-12-16 03:02:09.056977] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:39.150 [2024-12-16 03:02:09.057509] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:39.150 [2024-12-16 03:02:09.057700] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:39.150 [2024-12-16 03:02:09.057843] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:39.150 [2024-12-16 03:02:09.057991] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:39.150 [2024-12-16 03:02:09.134328] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:39.150 Malloc0 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:39.150 [2024-12-16 03:02:09.214582] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:39.150 03:02:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:40:39.150 test case1: single bdev can't be used in multiple subsystems 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:39.150 [2024-12-16 03:02:09.250027] 
bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:40:39.150 [2024-12-16 03:02:09.250053] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:40:39.150 [2024-12-16 03:02:09.250062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:39.150 request: 00:40:39.150 { 00:40:39.150 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:40:39.150 "namespace": { 00:40:39.150 "bdev_name": "Malloc0", 00:40:39.150 "no_auto_visible": false, 00:40:39.150 "hide_metadata": false 00:40:39.150 }, 00:40:39.150 "method": "nvmf_subsystem_add_ns", 00:40:39.150 "req_id": 1 00:40:39.150 } 00:40:39.150 Got JSON-RPC error response 00:40:39.150 response: 00:40:39.150 { 00:40:39.150 "code": -32602, 00:40:39.150 "message": "Invalid parameters" 00:40:39.150 } 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:40:39.150 Adding namespace failed - expected result. 
00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:40:39.150 test case2: host connect to nvmf target in multiple paths 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:39.150 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:39.151 [2024-12-16 03:02:09.262110] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:40:39.151 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:39.151 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:39.151 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:40:39.151 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:40:39.151 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:40:39.151 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:39.151 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:40:39.151 03:02:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:40:41.688 03:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:41.688 03:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:41.688 03:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:41.688 03:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:40:41.688 03:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:41.688 03:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:40:41.688 03:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:41.688 [global] 00:40:41.688 thread=1 00:40:41.688 invalidate=1 00:40:41.688 rw=write 00:40:41.688 time_based=1 00:40:41.688 runtime=1 00:40:41.688 ioengine=libaio 00:40:41.688 direct=1 00:40:41.688 bs=4096 00:40:41.688 iodepth=1 00:40:41.688 norandommap=0 00:40:41.688 numjobs=1 00:40:41.688 00:40:41.688 verify_dump=1 00:40:41.688 verify_backlog=512 00:40:41.688 verify_state_save=0 00:40:41.688 do_verify=1 00:40:41.688 verify=crc32c-intel 00:40:41.688 [job0] 00:40:41.688 filename=/dev/nvme0n1 00:40:41.688 Could not set queue depth (nvme0n1) 00:40:41.688 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:41.688 fio-3.35 00:40:41.688 Starting 1 thread 00:40:42.625 00:40:42.625 job0: (groupid=0, jobs=1): err= 0: pid=1272560: Mon Dec 16 
03:02:13 2024 00:40:42.625 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:40:42.625 slat (nsec): min=7066, max=49248, avg=8166.78, stdev=1820.47 00:40:42.625 clat (usec): min=170, max=517, avg=194.13, stdev= 9.79 00:40:42.625 lat (usec): min=187, max=524, avg=202.29, stdev= 9.90 00:40:42.625 clat percentiles (usec): 00:40:42.625 | 1.00th=[ 184], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 190], 00:40:42.625 | 30.00th=[ 192], 40.00th=[ 192], 50.00th=[ 194], 60.00th=[ 196], 00:40:42.625 | 70.00th=[ 196], 80.00th=[ 198], 90.00th=[ 202], 95.00th=[ 204], 00:40:42.625 | 99.00th=[ 212], 99.50th=[ 217], 99.90th=[ 262], 99.95th=[ 420], 00:40:42.625 | 99.99th=[ 519] 00:40:42.625 write: IOPS=2878, BW=11.2MiB/s (11.8MB/s)(11.3MiB/1001msec); 0 zone resets 00:40:42.625 slat (usec): min=10, max=27883, avg=21.54, stdev=519.28 00:40:42.625 clat (usec): min=127, max=850, avg=140.32, stdev=17.47 00:40:42.625 lat (usec): min=139, max=28141, avg=161.86, stdev=521.76 00:40:42.625 clat percentiles (usec): 00:40:42.625 | 1.00th=[ 133], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 135], 00:40:42.625 | 30.00th=[ 137], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 139], 00:40:42.625 | 70.00th=[ 141], 80.00th=[ 143], 90.00th=[ 147], 95.00th=[ 151], 00:40:42.625 | 99.00th=[ 186], 99.50th=[ 202], 99.90th=[ 334], 99.95th=[ 334], 00:40:42.625 | 99.99th=[ 848] 00:40:42.625 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:40:42.625 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:40:42.625 lat (usec) : 250=99.78%, 500=0.18%, 750=0.02%, 1000=0.02% 00:40:42.625 cpu : usr=4.50%, sys=8.70%, ctx=5445, majf=0, minf=1 00:40:42.625 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:42.625 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:42.625 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:42.625 issued rwts: total=2560,2881,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:40:42.625 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:42.625 00:40:42.625 Run status group 0 (all jobs): 00:40:42.625 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:40:42.625 WRITE: bw=11.2MiB/s (11.8MB/s), 11.2MiB/s-11.2MiB/s (11.8MB/s-11.8MB/s), io=11.3MiB (11.8MB), run=1001-1001msec 00:40:42.625 00:40:42.625 Disk stats (read/write): 00:40:42.625 nvme0n1: ios=2349/2560, merge=0/0, ticks=1140/323, in_queue=1463, util=98.50% 00:40:42.625 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:42.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:40:42.884 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:42.884 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:40:42.884 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:42.884 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:42.884 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:42.884 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:42.884 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:40:42.884 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:40:42.884 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:40:42.884 03:02:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:42.884 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:40:42.884 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:42.884 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:40:42.884 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:42.884 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:42.884 rmmod nvme_tcp 00:40:42.884 rmmod nvme_fabrics 00:40:42.884 rmmod nvme_keyring 00:40:42.884 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:42.884 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:40:42.884 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:40:42.884 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1271957 ']' 00:40:42.884 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1271957 00:40:42.884 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1271957 ']' 00:40:42.884 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1271957 00:40:42.884 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:40:42.884 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:42.884 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1271957 
00:40:42.884 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:42.884 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:42.884 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1271957' 00:40:42.884 killing process with pid 1271957 00:40:42.884 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1271957 00:40:42.884 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1271957 00:40:43.144 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:43.144 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:43.144 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:43.144 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:40:43.144 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:40:43.144 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:43.144 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:40:43.144 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:43.144 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:43.144 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:43.144 03:02:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:43.144 03:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:45.680 00:40:45.680 real 0m13.008s 00:40:45.680 user 0m24.177s 00:40:45.680 sys 0m6.115s 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:45.680 ************************************ 00:40:45.680 END TEST nvmf_nmic 00:40:45.680 ************************************ 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:45.680 ************************************ 00:40:45.680 START TEST nvmf_fio_target 00:40:45.680 ************************************ 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:45.680 * Looking for test storage... 
00:40:45.680 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:45.680 
03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:45.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:45.680 --rc genhtml_branch_coverage=1 00:40:45.680 --rc genhtml_function_coverage=1 00:40:45.680 --rc genhtml_legend=1 00:40:45.680 --rc geninfo_all_blocks=1 00:40:45.680 --rc geninfo_unexecuted_blocks=1 00:40:45.680 00:40:45.680 ' 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:45.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:45.680 --rc genhtml_branch_coverage=1 00:40:45.680 --rc genhtml_function_coverage=1 00:40:45.680 --rc genhtml_legend=1 00:40:45.680 --rc geninfo_all_blocks=1 00:40:45.680 --rc geninfo_unexecuted_blocks=1 00:40:45.680 00:40:45.680 ' 00:40:45.680 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:45.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:45.680 --rc genhtml_branch_coverage=1 00:40:45.680 --rc genhtml_function_coverage=1 00:40:45.680 --rc genhtml_legend=1 00:40:45.680 --rc geninfo_all_blocks=1 00:40:45.681 --rc geninfo_unexecuted_blocks=1 00:40:45.681 00:40:45.681 ' 00:40:45.681 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:45.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:45.681 --rc genhtml_branch_coverage=1 00:40:45.681 --rc genhtml_function_coverage=1 00:40:45.681 --rc genhtml_legend=1 00:40:45.681 --rc geninfo_all_blocks=1 
00:40:45.681 --rc geninfo_unexecuted_blocks=1 00:40:45.681 00:40:45.681 ' 00:40:45.681 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:45.681 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:40:45.681 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:45.681 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:45.681 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:45.681 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:45.681 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:45.681 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:45.681 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:45.681 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:45.681 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:45.681 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:45.681 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:45.681 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:45.681 
03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:45.681 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:45.681 03:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:45.681 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:45.681 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:45.681 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:40:45.681 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:45.681 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:45.681 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:45.681 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:45.681 03:02:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:45.681 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:45.681 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:40:45.681 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:45.681 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:40:45.681 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:45.681 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:45.681 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:45.681 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:45.681 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:45.681 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:45.681 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:45.681 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:45.681 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:45.681 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:45.681 
03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:45.681 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:45.681 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:45.681 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:40:45.682 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:45.682 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:45.682 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:45.682 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:45.682 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:45.682 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:45.682 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:45.682 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:45.682 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:45.682 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:45.682 03:02:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:40:45.682 03:02:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:50.962 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:50.962 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:40:50.962 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:50.962 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:50.962 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:50.962 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:50.962 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:50.962 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:40:50.962 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:50.962 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:40:50.962 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:40:50.962 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:40:50.962 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:40:50.962 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:40:50.962 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:40:50.962 03:02:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:50.962 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:50.962 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:50.962 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:50.962 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:50.962 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:50.962 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:50.962 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:50.962 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:50.962 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:50.962 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:50.962 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:50.962 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:50.962 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:50.962 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:50.962 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:50.963 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:50.963 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:50.963 
03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:50.963 Found net 
devices under 0000:af:00.0: cvl_0_0 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:50.963 Found net devices under 0000:af:00.1: cvl_0_1 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:50.963 03:02:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:40:50.963 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:51.224 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:51.224 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:51.224 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:51.224 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:51.224 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:51.224 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:51.224 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:51.224 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:51.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:51.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms 00:40:51.224 00:40:51.224 --- 10.0.0.2 ping statistics --- 00:40:51.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:51.224 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:40:51.224 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:51.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:51.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:40:51.224 00:40:51.224 --- 10.0.0.1 ping statistics --- 00:40:51.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:51.224 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:40:51.224 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:51.224 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:40:51.224 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:51.224 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:51.224 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:51.224 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:51.224 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:51.224 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:51.224 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:51.224 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:40:51.224 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:51.224 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:51.224 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:51.484 03:02:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1276252 00:40:51.484 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1276252 00:40:51.484 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:40:51.484 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1276252 ']' 00:40:51.484 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:51.484 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:51.484 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:51.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:51.484 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:51.484 03:02:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:51.484 [2024-12-16 03:02:21.936213] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:51.484 [2024-12-16 03:02:21.937197] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:40:51.484 [2024-12-16 03:02:21.937236] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:51.484 [2024-12-16 03:02:22.015169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:51.484 [2024-12-16 03:02:22.038268] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:51.484 [2024-12-16 03:02:22.038305] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:51.484 [2024-12-16 03:02:22.038312] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:51.484 [2024-12-16 03:02:22.038318] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:51.484 [2024-12-16 03:02:22.038324] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:51.484 [2024-12-16 03:02:22.039748] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:51.484 [2024-12-16 03:02:22.039867] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:40:51.484 [2024-12-16 03:02:22.039982] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:51.484 [2024-12-16 03:02:22.039983] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:40:51.484 [2024-12-16 03:02:22.104335] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:51.484 [2024-12-16 03:02:22.105489] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:51.484 [2024-12-16 03:02:22.105530] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:51.484 [2024-12-16 03:02:22.105892] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:51.484 [2024-12-16 03:02:22.105940] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:51.484 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:51.484 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:40:51.485 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:51.485 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:51.485 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:51.744 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:51.744 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:51.744 [2024-12-16 03:02:22.336732] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:51.744 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:52.004 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:40:52.004 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:40:52.263 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:40:52.263 03:02:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:52.522 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:40:52.522 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:52.781 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:40:52.781 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:40:53.041 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:53.041 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:40:53.041 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:53.300 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:40:53.300 03:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:53.559 03:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:40:53.559 03:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:40:53.819 03:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:53.819 03:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:40:53.819 03:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:54.078 03:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:40:54.078 03:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:40:54.337 03:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:54.337 [2024-12-16 03:02:24.960613] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:54.337 03:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:40:54.596 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:40:54.857 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:55.116 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:40:55.116 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:40:55.116 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:55.116 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:40:55.116 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:40:55.116 03:02:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:40:57.024 03:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:57.024 03:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:57.024 03:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:57.024 03:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:40:57.024 03:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:57.024 03:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:40:57.024 03:02:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:57.024 [global] 00:40:57.024 thread=1 00:40:57.024 invalidate=1 00:40:57.024 rw=write 00:40:57.024 time_based=1 00:40:57.024 runtime=1 00:40:57.024 ioengine=libaio 00:40:57.024 direct=1 00:40:57.024 bs=4096 00:40:57.024 iodepth=1 00:40:57.024 norandommap=0 00:40:57.024 numjobs=1 00:40:57.024 00:40:57.024 verify_dump=1 00:40:57.024 verify_backlog=512 00:40:57.024 verify_state_save=0 00:40:57.024 do_verify=1 00:40:57.024 verify=crc32c-intel 00:40:57.024 [job0] 00:40:57.024 filename=/dev/nvme0n1 00:40:57.024 [job1] 00:40:57.024 filename=/dev/nvme0n2 00:40:57.024 [job2] 00:40:57.024 filename=/dev/nvme0n3 00:40:57.024 [job3] 00:40:57.024 filename=/dev/nvme0n4 00:40:57.282 Could not set queue depth (nvme0n1) 00:40:57.282 Could not set queue depth (nvme0n2) 00:40:57.282 Could not set queue depth (nvme0n3) 00:40:57.282 Could not set queue depth (nvme0n4) 00:40:57.541 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:57.541 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:57.541 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:57.541 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:57.541 fio-3.35 00:40:57.541 Starting 4 threads 00:40:58.920 00:40:58.920 job0: (groupid=0, jobs=1): err= 0: pid=1277340: Mon Dec 16 03:02:29 2024 00:40:58.920 read: IOPS=748, BW=2993KiB/s (3065kB/s)(2996KiB/1001msec) 00:40:58.920 slat (nsec): min=6919, max=24225, avg=8018.47, stdev=2296.91 00:40:58.920 clat (usec): min=187, max=41954, avg=1052.16, stdev=5713.66 00:40:58.920 lat (usec): min=194, 
max=41977, avg=1060.18, stdev=5715.46 00:40:58.920 clat percentiles (usec): 00:40:58.920 | 1.00th=[ 198], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 210], 00:40:58.920 | 30.00th=[ 233], 40.00th=[ 241], 50.00th=[ 243], 60.00th=[ 245], 00:40:58.920 | 70.00th=[ 249], 80.00th=[ 251], 90.00th=[ 255], 95.00th=[ 260], 00:40:58.920 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:40:58.920 | 99.99th=[42206] 00:40:58.920 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:40:58.920 slat (nsec): min=9786, max=38552, avg=10915.13, stdev=1571.16 00:40:58.920 clat (usec): min=123, max=470, avg=185.31, stdev=51.33 00:40:58.920 lat (usec): min=133, max=509, avg=196.22, stdev=51.72 00:40:58.920 clat percentiles (usec): 00:40:58.920 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 143], 00:40:58.920 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 163], 60.00th=[ 186], 00:40:58.920 | 70.00th=[ 206], 80.00th=[ 237], 90.00th=[ 258], 95.00th=[ 285], 00:40:58.920 | 99.00th=[ 326], 99.50th=[ 334], 99.90th=[ 375], 99.95th=[ 469], 00:40:58.920 | 99.99th=[ 469] 00:40:58.920 bw ( KiB/s): min= 4096, max= 4096, per=22.93%, avg=4096.00, stdev= 0.00, samples=1 00:40:58.920 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:58.920 lat (usec) : 250=84.15%, 500=15.00% 00:40:58.920 lat (msec) : 50=0.85% 00:40:58.920 cpu : usr=1.70%, sys=2.50%, ctx=1773, majf=0, minf=1 00:40:58.920 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:58.920 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:58.920 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:58.920 issued rwts: total=749,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:58.920 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:58.920 job1: (groupid=0, jobs=1): err= 0: pid=1277341: Mon Dec 16 03:02:29 2024 00:40:58.920 read: IOPS=888, BW=3556KiB/s (3641kB/s)(3584KiB/1008msec) 
00:40:58.920 slat (nsec): min=6145, max=24508, avg=7320.37, stdev=1763.93 00:40:58.920 clat (usec): min=168, max=41854, avg=892.03, stdev=5156.91 00:40:58.920 lat (usec): min=175, max=41861, avg=899.35, stdev=5157.54 00:40:58.920 clat percentiles (usec): 00:40:58.921 | 1.00th=[ 174], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 202], 00:40:58.921 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 225], 00:40:58.921 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 253], 95.00th=[ 269], 00:40:58.921 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:40:58.921 | 99.99th=[41681] 00:40:58.921 write: IOPS=1015, BW=4063KiB/s (4161kB/s)(4096KiB/1008msec); 0 zone resets 00:40:58.921 slat (nsec): min=9348, max=34995, avg=10273.38, stdev=1264.21 00:40:58.921 clat (usec): min=123, max=443, avg=182.06, stdev=43.18 00:40:58.921 lat (usec): min=133, max=454, avg=192.33, stdev=43.44 00:40:58.921 clat percentiles (usec): 00:40:58.921 | 1.00th=[ 127], 5.00th=[ 129], 10.00th=[ 130], 20.00th=[ 133], 00:40:58.921 | 30.00th=[ 145], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 188], 00:40:58.921 | 70.00th=[ 210], 80.00th=[ 223], 90.00th=[ 237], 95.00th=[ 249], 00:40:58.921 | 99.00th=[ 277], 99.50th=[ 285], 99.90th=[ 408], 99.95th=[ 445], 00:40:58.921 | 99.99th=[ 445] 00:40:58.921 bw ( KiB/s): min= 4096, max= 4096, per=22.93%, avg=4096.00, stdev= 0.00, samples=2 00:40:58.921 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:40:58.921 lat (usec) : 250=92.34%, 500=6.82%, 750=0.05% 00:40:58.921 lat (msec) : 50=0.78% 00:40:58.921 cpu : usr=1.09%, sys=1.59%, ctx=1920, majf=0, minf=1 00:40:58.921 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:58.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:58.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:58.921 issued rwts: total=896,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:58.921 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:40:58.921 job2: (groupid=0, jobs=1): err= 0: pid=1277342: Mon Dec 16 03:02:29 2024 00:40:58.921 read: IOPS=1519, BW=6077KiB/s (6222kB/s)(6192KiB/1019msec) 00:40:58.921 slat (nsec): min=6983, max=23558, avg=8087.76, stdev=1464.65 00:40:58.921 clat (usec): min=173, max=41987, avg=427.86, stdev=2933.90 00:40:58.921 lat (usec): min=181, max=41996, avg=435.95, stdev=2934.12 00:40:58.921 clat percentiles (usec): 00:40:58.921 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 192], 00:40:58.921 | 30.00th=[ 196], 40.00th=[ 206], 50.00th=[ 217], 60.00th=[ 227], 00:40:58.921 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 247], 95.00th=[ 255], 00:40:58.921 | 99.00th=[ 285], 99.50th=[40633], 99.90th=[41681], 99.95th=[42206], 00:40:58.921 | 99.99th=[42206] 00:40:58.921 write: IOPS=2009, BW=8039KiB/s (8232kB/s)(8192KiB/1019msec); 0 zone resets 00:40:58.921 slat (nsec): min=9809, max=42392, avg=11191.37, stdev=2038.19 00:40:58.921 clat (usec): min=115, max=413, avg=151.46, stdev=17.07 00:40:58.921 lat (usec): min=142, max=450, avg=162.66, stdev=17.59 00:40:58.921 clat percentiles (usec): 00:40:58.921 | 1.00th=[ 135], 5.00th=[ 137], 10.00th=[ 137], 20.00th=[ 139], 00:40:58.921 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 145], 60.00th=[ 149], 00:40:58.921 | 70.00th=[ 159], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 180], 00:40:58.921 | 99.00th=[ 190], 99.50th=[ 200], 99.90th=[ 293], 99.95th=[ 297], 00:40:58.921 | 99.99th=[ 416] 00:40:58.921 bw ( KiB/s): min= 4096, max=12288, per=45.87%, avg=8192.00, stdev=5792.62, samples=2 00:40:58.921 iops : min= 1024, max= 3072, avg=2048.00, stdev=1448.15, samples=2 00:40:58.921 lat (usec) : 250=96.41%, 500=3.36% 00:40:58.921 lat (msec) : 50=0.22% 00:40:58.921 cpu : usr=3.54%, sys=4.91%, ctx=3596, majf=0, minf=1 00:40:58.921 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:58.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:58.921 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:58.921 issued rwts: total=1548,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:58.921 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:58.921 job3: (groupid=0, jobs=1): err= 0: pid=1277343: Mon Dec 16 03:02:29 2024 00:40:58.921 read: IOPS=30, BW=120KiB/s (123kB/s)(124KiB/1032msec) 00:40:58.921 slat (nsec): min=7588, max=33390, avg=17427.71, stdev=6877.77 00:40:58.921 clat (usec): min=295, max=41027, avg=28713.17, stdev=18618.73 00:40:58.921 lat (usec): min=304, max=41049, avg=28730.59, stdev=18622.57 00:40:58.921 clat percentiles (usec): 00:40:58.921 | 1.00th=[ 297], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 330], 00:40:58.921 | 30.00th=[27132], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:58.921 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:58.921 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:58.921 | 99.99th=[41157] 00:40:58.921 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:40:58.921 slat (nsec): min=10105, max=39628, avg=11703.95, stdev=2178.62 00:40:58.921 clat (usec): min=157, max=458, avg=259.94, stdev=62.28 00:40:58.921 lat (usec): min=170, max=493, avg=271.65, stdev=62.55 00:40:58.921 clat percentiles (usec): 00:40:58.921 | 1.00th=[ 165], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 202], 00:40:58.921 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 249], 60.00th=[ 262], 00:40:58.921 | 70.00th=[ 273], 80.00th=[ 314], 90.00th=[ 347], 95.00th=[ 396], 00:40:58.921 | 99.00th=[ 420], 99.50th=[ 445], 99.90th=[ 457], 99.95th=[ 457], 00:40:58.921 | 99.99th=[ 457] 00:40:58.921 bw ( KiB/s): min= 4096, max= 4096, per=22.93%, avg=4096.00, stdev= 0.00, samples=1 00:40:58.921 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:58.921 lat (usec) : 250=47.33%, 500=48.62% 00:40:58.921 lat (msec) : 50=4.05% 00:40:58.921 cpu : usr=0.48%, sys=0.78%, ctx=543, majf=0, minf=1 00:40:58.921 IO depths 
: 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:58.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:58.921 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:58.921 issued rwts: total=31,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:58.921 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:58.921 00:40:58.921 Run status group 0 (all jobs): 00:40:58.921 READ: bw=12.2MiB/s (12.8MB/s), 120KiB/s-6077KiB/s (123kB/s-6222kB/s), io=12.6MiB (13.2MB), run=1001-1032msec 00:40:58.921 WRITE: bw=17.4MiB/s (18.3MB/s), 1984KiB/s-8039KiB/s (2032kB/s-8232kB/s), io=18.0MiB (18.9MB), run=1001-1032msec 00:40:58.921 00:40:58.921 Disk stats (read/write): 00:40:58.921 nvme0n1: ios=507/512, merge=0/0, ticks=743/109, in_queue=852, util=86.57% 00:40:58.921 nvme0n2: ios=911/1024, merge=0/0, ticks=628/180, in_queue=808, util=87.28% 00:40:58.921 nvme0n3: ios=1539/2048, merge=0/0, ticks=443/273, in_queue=716, util=88.95% 00:40:58.921 nvme0n4: ios=26/512, merge=0/0, ticks=686/124, in_queue=810, util=89.60% 00:40:58.921 03:02:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:40:58.921 [global] 00:40:58.921 thread=1 00:40:58.921 invalidate=1 00:40:58.921 rw=randwrite 00:40:58.921 time_based=1 00:40:58.921 runtime=1 00:40:58.921 ioengine=libaio 00:40:58.921 direct=1 00:40:58.921 bs=4096 00:40:58.921 iodepth=1 00:40:58.921 norandommap=0 00:40:58.921 numjobs=1 00:40:58.921 00:40:58.921 verify_dump=1 00:40:58.921 verify_backlog=512 00:40:58.921 verify_state_save=0 00:40:58.921 do_verify=1 00:40:58.921 verify=crc32c-intel 00:40:58.921 [job0] 00:40:58.921 filename=/dev/nvme0n1 00:40:58.921 [job1] 00:40:58.921 filename=/dev/nvme0n2 00:40:58.921 [job2] 00:40:58.921 filename=/dev/nvme0n3 00:40:58.921 [job3] 00:40:58.921 filename=/dev/nvme0n4 00:40:58.921 Could not 
set queue depth (nvme0n1) 00:40:58.921 Could not set queue depth (nvme0n2) 00:40:58.921 Could not set queue depth (nvme0n3) 00:40:58.921 Could not set queue depth (nvme0n4) 00:40:58.921 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:58.921 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:58.921 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:58.921 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:58.921 fio-3.35 00:40:58.921 Starting 4 threads 00:41:00.301 00:41:00.301 job0: (groupid=0, jobs=1): err= 0: pid=1277708: Mon Dec 16 03:02:30 2024 00:41:00.301 read: IOPS=27, BW=112KiB/s (114kB/s)(112KiB/1003msec) 00:41:00.301 slat (nsec): min=8218, max=25352, avg=19561.96, stdev=6142.16 00:41:00.301 clat (usec): min=218, max=41427, avg=32231.30, stdev=17012.39 00:41:00.301 lat (usec): min=227, max=41436, avg=32250.86, stdev=17012.87 00:41:00.301 clat percentiles (usec): 00:41:00.301 | 1.00th=[ 219], 5.00th=[ 221], 10.00th=[ 245], 20.00th=[ 273], 00:41:00.301 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:00.301 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:00.301 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:41:00.301 | 99.99th=[41681] 00:41:00.301 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:41:00.301 slat (nsec): min=9921, max=60559, avg=11875.41, stdev=3090.08 00:41:00.301 clat (usec): min=154, max=300, avg=179.02, stdev=14.40 00:41:00.301 lat (usec): min=164, max=360, avg=190.89, stdev=15.54 00:41:00.301 clat percentiles (usec): 00:41:00.301 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 169], 00:41:00.301 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:41:00.301 
| 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 202], 00:41:00.301 | 99.00th=[ 225], 99.50th=[ 247], 99.90th=[ 302], 99.95th=[ 302], 00:41:00.301 | 99.99th=[ 302] 00:41:00.301 bw ( KiB/s): min= 4087, max= 4087, per=51.74%, avg=4087.00, stdev= 0.00, samples=1 00:41:00.301 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:41:00.301 lat (usec) : 250=95.19%, 500=0.74% 00:41:00.301 lat (msec) : 50=4.07% 00:41:00.301 cpu : usr=0.70%, sys=0.60%, ctx=541, majf=0, minf=1 00:41:00.301 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:00.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.301 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.301 issued rwts: total=28,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.301 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:00.301 job1: (groupid=0, jobs=1): err= 0: pid=1277709: Mon Dec 16 03:02:30 2024 00:41:00.301 read: IOPS=34, BW=139KiB/s (142kB/s)(144KiB/1037msec) 00:41:00.301 slat (nsec): min=6837, max=23763, avg=17520.33, stdev=7239.06 00:41:00.301 clat (usec): min=197, max=41277, avg=26187.18, stdev=19795.75 00:41:00.301 lat (usec): min=204, max=41284, avg=26204.70, stdev=19794.10 00:41:00.301 clat percentiles (usec): 00:41:00.301 | 1.00th=[ 198], 5.00th=[ 198], 10.00th=[ 229], 20.00th=[ 233], 00:41:00.301 | 30.00th=[ 235], 40.00th=[40633], 50.00th=[40633], 60.00th=[40633], 00:41:00.301 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:00.301 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:00.301 | 99.99th=[41157] 00:41:00.301 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:41:00.301 slat (nsec): min=9184, max=35931, avg=10308.42, stdev=1375.53 00:41:00.301 clat (usec): min=144, max=281, avg=168.82, stdev=13.26 00:41:00.301 lat (usec): min=154, max=317, avg=179.13, stdev=13.75 00:41:00.301 clat percentiles 
(usec): 00:41:00.301 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:41:00.301 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 167], 60.00th=[ 169], 00:41:00.301 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 182], 95.00th=[ 190], 00:41:00.301 | 99.00th=[ 223], 99.50th=[ 237], 99.90th=[ 281], 99.95th=[ 281], 00:41:00.301 | 99.99th=[ 281] 00:41:00.301 bw ( KiB/s): min= 4087, max= 4087, per=51.74%, avg=4087.00, stdev= 0.00, samples=1 00:41:00.301 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:41:00.301 lat (usec) : 250=95.44%, 500=0.36% 00:41:00.301 lat (msec) : 50=4.20% 00:41:00.301 cpu : usr=0.19%, sys=0.58%, ctx=550, majf=0, minf=1 00:41:00.301 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:00.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.301 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.301 issued rwts: total=36,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.301 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:00.301 job2: (groupid=0, jobs=1): err= 0: pid=1277710: Mon Dec 16 03:02:30 2024 00:41:00.301 read: IOPS=21, BW=87.6KiB/s (89.8kB/s)(88.0KiB/1004msec) 00:41:00.301 slat (nsec): min=10253, max=30268, avg=23155.09, stdev=3445.14 00:41:00.301 clat (usec): min=40499, max=42986, avg=41085.49, stdev=502.32 00:41:00.301 lat (usec): min=40510, max=43017, avg=41108.65, stdev=504.29 00:41:00.301 clat percentiles (usec): 00:41:00.301 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:41:00.301 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:00.301 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:41:00.301 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:41:00.301 | 99.99th=[42730] 00:41:00.301 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:41:00.301 slat (nsec): min=10379, max=41924, 
avg=12053.52, stdev=2235.00 00:41:00.301 clat (usec): min=154, max=242, avg=177.79, stdev=11.85 00:41:00.301 lat (usec): min=165, max=284, avg=189.84, stdev=12.41 00:41:00.301 clat percentiles (usec): 00:41:00.301 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 169], 00:41:00.301 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 180], 00:41:00.301 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 200], 00:41:00.301 | 99.00th=[ 219], 99.50th=[ 227], 99.90th=[ 243], 99.95th=[ 243], 00:41:00.301 | 99.99th=[ 243] 00:41:00.301 bw ( KiB/s): min= 4087, max= 4087, per=51.74%, avg=4087.00, stdev= 0.00, samples=1 00:41:00.301 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:41:00.301 lat (usec) : 250=95.88% 00:41:00.301 lat (msec) : 50=4.12% 00:41:00.301 cpu : usr=0.60%, sys=0.80%, ctx=537, majf=0, minf=1 00:41:00.301 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:00.301 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.301 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.301 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.301 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:00.301 job3: (groupid=0, jobs=1): err= 0: pid=1277711: Mon Dec 16 03:02:30 2024 00:41:00.301 read: IOPS=21, BW=87.8KiB/s (89.9kB/s)(88.0KiB/1002msec) 00:41:00.301 slat (nsec): min=9858, max=27198, avg=23392.09, stdev=3448.54 00:41:00.301 clat (usec): min=40478, max=41166, avg=40944.62, stdev=120.43 00:41:00.301 lat (usec): min=40487, max=41185, avg=40968.01, stdev=122.68 00:41:00.301 clat percentiles (usec): 00:41:00.301 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:41:00.301 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:00.301 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:00.301 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 
99.95th=[41157], 00:41:00.301 | 99.99th=[41157] 00:41:00.301 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:41:00.301 slat (nsec): min=10359, max=37062, avg=11995.66, stdev=2377.11 00:41:00.301 clat (usec): min=134, max=487, avg=180.05, stdev=20.27 00:41:00.301 lat (usec): min=145, max=524, avg=192.04, stdev=21.42 00:41:00.301 clat percentiles (usec): 00:41:00.302 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:41:00.302 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 182], 00:41:00.302 | 70.00th=[ 186], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 208], 00:41:00.302 | 99.00th=[ 227], 99.50th=[ 235], 99.90th=[ 490], 99.95th=[ 490], 00:41:00.302 | 99.99th=[ 490] 00:41:00.302 bw ( KiB/s): min= 4096, max= 4096, per=51.85%, avg=4096.00, stdev= 0.00, samples=1 00:41:00.302 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:00.302 lat (usec) : 250=95.69%, 500=0.19% 00:41:00.302 lat (msec) : 50=4.12% 00:41:00.302 cpu : usr=0.30%, sys=1.00%, ctx=535, majf=0, minf=1 00:41:00.302 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:00.302 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.302 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:00.302 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:00.302 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:00.302 00:41:00.302 Run status group 0 (all jobs): 00:41:00.302 READ: bw=417KiB/s (427kB/s), 87.6KiB/s-139KiB/s (89.8kB/s-142kB/s), io=432KiB (442kB), run=1002-1037msec 00:41:00.302 WRITE: bw=7900KiB/s (8089kB/s), 1975KiB/s-2044KiB/s (2022kB/s-2093kB/s), io=8192KiB (8389kB), run=1002-1037msec 00:41:00.302 00:41:00.302 Disk stats (read/write): 00:41:00.302 nvme0n1: ios=68/512, merge=0/0, ticks=759/86, in_queue=845, util=85.97% 00:41:00.302 nvme0n2: ios=68/512, merge=0/0, ticks=1253/82, in_queue=1335, util=97.24% 00:41:00.302 nvme0n3: 
ios=61/512, merge=0/0, ticks=1703/76, in_queue=1779, util=96.96% 00:41:00.302 nvme0n4: ios=54/512, merge=0/0, ticks=1129/85, in_queue=1214, util=97.25% 00:41:00.302 03:02:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:41:00.302 [global] 00:41:00.302 thread=1 00:41:00.302 invalidate=1 00:41:00.302 rw=write 00:41:00.302 time_based=1 00:41:00.302 runtime=1 00:41:00.302 ioengine=libaio 00:41:00.302 direct=1 00:41:00.302 bs=4096 00:41:00.302 iodepth=128 00:41:00.302 norandommap=0 00:41:00.302 numjobs=1 00:41:00.302 00:41:00.302 verify_dump=1 00:41:00.302 verify_backlog=512 00:41:00.302 verify_state_save=0 00:41:00.302 do_verify=1 00:41:00.302 verify=crc32c-intel 00:41:00.302 [job0] 00:41:00.302 filename=/dev/nvme0n1 00:41:00.302 [job1] 00:41:00.302 filename=/dev/nvme0n2 00:41:00.302 [job2] 00:41:00.302 filename=/dev/nvme0n3 00:41:00.302 [job3] 00:41:00.302 filename=/dev/nvme0n4 00:41:00.302 Could not set queue depth (nvme0n1) 00:41:00.302 Could not set queue depth (nvme0n2) 00:41:00.302 Could not set queue depth (nvme0n3) 00:41:00.302 Could not set queue depth (nvme0n4) 00:41:00.561 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:00.561 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:00.561 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:00.561 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:00.561 fio-3.35 00:41:00.561 Starting 4 threads 00:41:01.942 00:41:01.942 job0: (groupid=0, jobs=1): err= 0: pid=1278078: Mon Dec 16 03:02:32 2024 00:41:01.942 read: IOPS=3160, BW=12.3MiB/s (12.9MB/s)(12.9MiB/1043msec) 00:41:01.942 slat (nsec): min=1338, max=23565k, avg=146863.59, 
stdev=1061058.74 00:41:01.942 clat (usec): min=7848, max=79809, avg=19096.95, stdev=15918.17 00:41:01.942 lat (usec): min=8009, max=79816, avg=19243.81, stdev=16005.34 00:41:01.942 clat percentiles (usec): 00:41:01.942 | 1.00th=[ 8455], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10290], 00:41:01.942 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11863], 60.00th=[12125], 00:41:01.942 | 70.00th=[15008], 80.00th=[26084], 90.00th=[45351], 95.00th=[58983], 00:41:01.942 | 99.00th=[77071], 99.50th=[80217], 99.90th=[80217], 99.95th=[80217], 00:41:01.942 | 99.99th=[80217] 00:41:01.942 write: IOPS=3436, BW=13.4MiB/s (14.1MB/s)(14.0MiB/1043msec); 0 zone resets 00:41:01.942 slat (usec): min=2, max=21658, avg=137.83, stdev=1024.95 00:41:01.942 clat (usec): min=7409, max=72264, avg=18598.18, stdev=16877.76 00:41:01.942 lat (usec): min=7867, max=72275, avg=18736.01, stdev=16965.99 00:41:01.942 clat percentiles (usec): 00:41:01.942 | 1.00th=[ 8586], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[ 9765], 00:41:01.942 | 30.00th=[10028], 40.00th=[10290], 50.00th=[11469], 60.00th=[11863], 00:41:01.942 | 70.00th=[12256], 80.00th=[18482], 90.00th=[52167], 95.00th=[62129], 00:41:01.942 | 99.00th=[69731], 99.50th=[69731], 99.90th=[71828], 99.95th=[71828], 00:41:01.942 | 99.99th=[71828] 00:41:01.942 bw ( KiB/s): min= 8192, max=20480, per=22.12%, avg=14336.00, stdev=8688.93, samples=2 00:41:01.942 iops : min= 2048, max= 5120, avg=3584.00, stdev=2172.23, samples=2 00:41:01.942 lat (msec) : 10=20.74%, 20=56.92%, 50=13.23%, 100=9.11% 00:41:01.942 cpu : usr=2.21%, sys=4.99%, ctx=360, majf=0, minf=1 00:41:01.942 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:41:01.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.942 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:01.942 issued rwts: total=3296,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:01.942 latency : target=0, window=0, percentile=100.00%, depth=128 
00:41:01.942 job1: (groupid=0, jobs=1): err= 0: pid=1278079: Mon Dec 16 03:02:32 2024 00:41:01.942 read: IOPS=3560, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec) 00:41:01.942 slat (nsec): min=1009, max=24650k, avg=131612.05, stdev=1007827.43 00:41:01.942 clat (usec): min=3593, max=92662, avg=16745.44, stdev=12397.67 00:41:01.942 lat (usec): min=5297, max=94955, avg=16877.06, stdev=12458.68 00:41:01.942 clat percentiles (usec): 00:41:01.942 | 1.00th=[ 5866], 5.00th=[ 7767], 10.00th=[ 8455], 20.00th=[ 9634], 00:41:01.942 | 30.00th=[10814], 40.00th=[11994], 50.00th=[12518], 60.00th=[13042], 00:41:01.942 | 70.00th=[15139], 80.00th=[21103], 90.00th=[30540], 95.00th=[44827], 00:41:01.942 | 99.00th=[76022], 99.50th=[83362], 99.90th=[88605], 99.95th=[92799], 00:41:01.942 | 99.99th=[92799] 00:41:01.942 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:41:01.942 slat (nsec): min=1765, max=18949k, avg=143115.14, stdev=861388.44 00:41:01.942 clat (usec): min=5573, max=88117, avg=18813.05, stdev=15245.97 00:41:01.942 lat (usec): min=5692, max=88124, avg=18956.17, stdev=15346.14 00:41:01.942 clat percentiles (usec): 00:41:01.942 | 1.00th=[ 6783], 5.00th=[ 8029], 10.00th=[ 8094], 20.00th=[ 9110], 00:41:01.942 | 30.00th=[10159], 40.00th=[11207], 50.00th=[11863], 60.00th=[12780], 00:41:01.942 | 70.00th=[18482], 80.00th=[28443], 90.00th=[40633], 95.00th=[54264], 00:41:01.942 | 99.00th=[77071], 99.50th=[82314], 99.90th=[86508], 99.95th=[88605], 00:41:01.942 | 99.99th=[88605] 00:41:01.942 bw ( KiB/s): min=12288, max=16384, per=22.12%, avg=14336.00, stdev=2896.31, samples=2 00:41:01.942 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:41:01.942 lat (msec) : 4=0.01%, 10=24.94%, 20=50.07%, 50=20.92%, 100=4.06% 00:41:01.942 cpu : usr=2.59%, sys=2.79%, ctx=336, majf=0, minf=2 00:41:01.942 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:41:01.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:41:01.942 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:01.942 issued rwts: total=3582,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:01.942 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:01.942 job2: (groupid=0, jobs=1): err= 0: pid=1278083: Mon Dec 16 03:02:32 2024 00:41:01.942 read: IOPS=5086, BW=19.9MiB/s (20.8MB/s)(19.9MiB/1003msec) 00:41:01.942 slat (nsec): min=1141, max=10707k, avg=95030.32, stdev=655097.14 00:41:01.942 clat (usec): min=617, max=31680, avg=12261.29, stdev=3472.50 00:41:01.942 lat (usec): min=3874, max=31690, avg=12356.32, stdev=3518.14 00:41:01.942 clat percentiles (usec): 00:41:01.942 | 1.00th=[ 7111], 5.00th=[ 8356], 10.00th=[ 8979], 20.00th=[ 9896], 00:41:01.942 | 30.00th=[10421], 40.00th=[11207], 50.00th=[11600], 60.00th=[12125], 00:41:01.942 | 70.00th=[12911], 80.00th=[14091], 90.00th=[15926], 95.00th=[18220], 00:41:01.942 | 99.00th=[26346], 99.50th=[29230], 99.90th=[31327], 99.95th=[31589], 00:41:01.942 | 99.99th=[31589] 00:41:01.942 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:41:01.942 slat (nsec): min=1983, max=10335k, avg=92595.78, stdev=555274.06 00:41:01.942 clat (usec): min=205, max=43031, avg=12596.35, stdev=5635.37 00:41:01.942 lat (usec): min=219, max=43035, avg=12688.95, stdev=5670.01 00:41:01.942 clat percentiles (usec): 00:41:01.942 | 1.00th=[ 2180], 5.00th=[ 6259], 10.00th=[ 7898], 20.00th=[ 9110], 00:41:01.942 | 30.00th=[ 9896], 40.00th=[10683], 50.00th=[11338], 60.00th=[11600], 00:41:01.942 | 70.00th=[12518], 80.00th=[14746], 90.00th=[23462], 95.00th=[24511], 00:41:01.942 | 99.00th=[30540], 99.50th=[35914], 99.90th=[39584], 99.95th=[39584], 00:41:01.942 | 99.99th=[43254] 00:41:01.942 bw ( KiB/s): min=20480, max=20480, per=31.61%, avg=20480.00, stdev= 0.00, samples=2 00:41:01.942 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:41:01.942 lat (usec) : 250=0.13%, 500=0.04%, 750=0.01%, 1000=0.17% 00:41:01.942 lat (msec) 
: 2=0.13%, 4=0.72%, 10=24.35%, 20=66.36%, 50=8.10% 00:41:01.942 cpu : usr=4.79%, sys=5.79%, ctx=461, majf=0, minf=1 00:41:01.942 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:41:01.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.942 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:01.942 issued rwts: total=5102,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:01.942 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:01.942 job3: (groupid=0, jobs=1): err= 0: pid=1278085: Mon Dec 16 03:02:32 2024 00:41:01.942 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:41:01.942 slat (nsec): min=1280, max=12030k, avg=94720.15, stdev=668316.71 00:41:01.942 clat (usec): min=1630, max=49227, avg=12677.86, stdev=6771.18 00:41:01.942 lat (usec): min=1649, max=49262, avg=12772.58, stdev=6831.11 00:41:01.942 clat percentiles (usec): 00:41:01.942 | 1.00th=[ 2606], 5.00th=[ 4293], 10.00th=[ 7242], 20.00th=[ 9634], 00:41:01.942 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10683], 60.00th=[11207], 00:41:01.942 | 70.00th=[12256], 80.00th=[14877], 90.00th=[20841], 95.00th=[26346], 00:41:01.942 | 99.00th=[40633], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:41:01.942 | 99.99th=[49021] 00:41:01.942 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:41:01.942 slat (usec): min=2, max=21654, avg=125.12, stdev=986.73 00:41:01.942 clat (usec): min=377, max=61220, avg=15730.75, stdev=10023.75 00:41:01.942 lat (usec): min=1150, max=61253, avg=15855.87, stdev=10110.35 00:41:01.942 clat percentiles (usec): 00:41:01.942 | 1.00th=[ 1876], 5.00th=[ 6128], 10.00th=[ 7439], 20.00th=[ 8979], 00:41:01.942 | 30.00th=[ 9241], 40.00th=[10421], 50.00th=[11207], 60.00th=[12911], 00:41:01.942 | 70.00th=[16909], 80.00th=[24249], 90.00th=[30802], 95.00th=[36439], 00:41:01.942 | 99.00th=[47449], 99.50th=[49546], 99.90th=[49546], 99.95th=[57934], 00:41:01.942 | 
99.99th=[61080] 00:41:01.942 bw ( KiB/s): min=15360, max=20480, per=27.66%, avg=17920.00, stdev=3620.39, samples=2 00:41:01.942 iops : min= 3840, max= 5120, avg=4480.00, stdev=905.10, samples=2 00:41:01.942 lat (usec) : 500=0.02% 00:41:01.942 lat (msec) : 2=0.67%, 4=2.08%, 10=30.93%, 20=45.80%, 50=20.46% 00:41:01.942 lat (msec) : 100=0.05% 00:41:01.942 cpu : usr=3.98%, sys=4.78%, ctx=291, majf=0, minf=1 00:41:01.943 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:41:01.943 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:01.943 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:01.943 issued rwts: total=4096,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:01.943 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:01.943 00:41:01.943 Run status group 0 (all jobs): 00:41:01.943 READ: bw=60.2MiB/s (63.1MB/s), 12.3MiB/s-19.9MiB/s (12.9MB/s-20.8MB/s), io=62.8MiB (65.8MB), run=1003-1043msec 00:41:01.943 WRITE: bw=63.3MiB/s (66.4MB/s), 13.4MiB/s-19.9MiB/s (14.1MB/s-20.9MB/s), io=66.0MiB (69.2MB), run=1003-1043msec 00:41:01.943 00:41:01.943 Disk stats (read/write): 00:41:01.943 nvme0n1: ios=2540/2560, merge=0/0, ticks=13670/12925, in_queue=26595, util=96.59% 00:41:01.943 nvme0n2: ios=2562/3072, merge=0/0, ticks=12934/17898, in_queue=30832, util=86.90% 00:41:01.943 nvme0n3: ios=4155/4512, merge=0/0, ticks=42194/48401, in_queue=90595, util=100.00% 00:41:01.943 nvme0n4: ios=4120/4096, merge=0/0, ticks=31553/36168, in_queue=67721, util=98.11% 00:41:01.943 03:02:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:41:01.943 [global] 00:41:01.943 thread=1 00:41:01.943 invalidate=1 00:41:01.943 rw=randwrite 00:41:01.943 time_based=1 00:41:01.943 runtime=1 00:41:01.943 ioengine=libaio 00:41:01.943 direct=1 00:41:01.943 bs=4096 00:41:01.943 
iodepth=128 00:41:01.943 norandommap=0 00:41:01.943 numjobs=1 00:41:01.943 00:41:01.943 verify_dump=1 00:41:01.943 verify_backlog=512 00:41:01.943 verify_state_save=0 00:41:01.943 do_verify=1 00:41:01.943 verify=crc32c-intel 00:41:01.943 [job0] 00:41:01.943 filename=/dev/nvme0n1 00:41:01.943 [job1] 00:41:01.943 filename=/dev/nvme0n2 00:41:01.943 [job2] 00:41:01.943 filename=/dev/nvme0n3 00:41:01.943 [job3] 00:41:01.943 filename=/dev/nvme0n4 00:41:01.943 Could not set queue depth (nvme0n1) 00:41:01.943 Could not set queue depth (nvme0n2) 00:41:01.943 Could not set queue depth (nvme0n3) 00:41:01.943 Could not set queue depth (nvme0n4) 00:41:02.202 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:02.202 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:02.202 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:02.202 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:02.202 fio-3.35 00:41:02.202 Starting 4 threads 00:41:03.609 00:41:03.609 job0: (groupid=0, jobs=1): err= 0: pid=1278469: Mon Dec 16 03:02:33 2024 00:41:03.609 read: IOPS=6482, BW=25.3MiB/s (26.6MB/s)(25.5MiB/1006msec) 00:41:03.609 slat (nsec): min=1351, max=9234.0k, avg=81493.19, stdev=666869.69 00:41:03.609 clat (usec): min=1553, max=19068, avg=10246.88, stdev=2449.55 00:41:03.609 lat (usec): min=3137, max=22995, avg=10328.38, stdev=2508.88 00:41:03.609 clat percentiles (usec): 00:41:03.609 | 1.00th=[ 4817], 5.00th=[ 7832], 10.00th=[ 8094], 20.00th=[ 8717], 00:41:03.609 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[ 9896], 00:41:03.609 | 70.00th=[10290], 80.00th=[11600], 90.00th=[14222], 95.00th=[15664], 00:41:03.609 | 99.00th=[17957], 99.50th=[18482], 99.90th=[19006], 99.95th=[19006], 00:41:03.609 | 99.99th=[19006] 00:41:03.609 
write: IOPS=6616, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1006msec); 0 zone resets 00:41:03.609 slat (usec): min=2, max=8372, avg=65.16, stdev=499.04 00:41:03.609 clat (usec): min=1512, max=18970, avg=9111.07, stdev=2003.21 00:41:03.609 lat (usec): min=1524, max=18977, avg=9176.23, stdev=2048.50 00:41:03.609 clat percentiles (usec): 00:41:03.609 | 1.00th=[ 3621], 5.00th=[ 5800], 10.00th=[ 6325], 20.00th=[ 7898], 00:41:03.609 | 30.00th=[ 8586], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9634], 00:41:03.609 | 70.00th=[ 9765], 80.00th=[10028], 90.00th=[10421], 95.00th=[13173], 00:41:03.609 | 99.00th=[15139], 99.50th=[16581], 99.90th=[17957], 99.95th=[18482], 00:41:03.609 | 99.99th=[19006] 00:41:03.609 bw ( KiB/s): min=25416, max=27832, per=35.64%, avg=26624.00, stdev=1708.37, samples=2 00:41:03.609 iops : min= 6354, max= 6958, avg=6656.00, stdev=427.09, samples=2 00:41:03.609 lat (msec) : 2=0.07%, 4=0.83%, 10=71.75%, 20=27.35% 00:41:03.609 cpu : usr=4.58%, sys=7.66%, ctx=449, majf=0, minf=1 00:41:03.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:41:03.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:03.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:03.609 issued rwts: total=6521,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:03.609 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:03.610 job1: (groupid=0, jobs=1): err= 0: pid=1278485: Mon Dec 16 03:02:33 2024 00:41:03.610 read: IOPS=2104, BW=8417KiB/s (8619kB/s)(8484KiB/1008msec) 00:41:03.610 slat (nsec): min=1509, max=19139k, avg=158505.28, stdev=1039938.80 00:41:03.610 clat (usec): min=8442, max=77485, avg=18739.48, stdev=13545.37 00:41:03.610 lat (usec): min=8454, max=77494, avg=18897.98, stdev=13676.87 00:41:03.610 clat percentiles (usec): 00:41:03.610 | 1.00th=[ 9372], 5.00th=[10683], 10.00th=[11600], 20.00th=[12256], 00:41:03.610 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12911], 60.00th=[13304], 
00:41:03.610 | 70.00th=[13960], 80.00th=[19792], 90.00th=[39060], 95.00th=[54264], 00:41:03.610 | 99.00th=[69731], 99.50th=[73925], 99.90th=[77071], 99.95th=[77071], 00:41:03.610 | 99.99th=[77071] 00:41:03.610 write: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec); 0 zone resets 00:41:03.610 slat (nsec): min=1925, max=22014k, avg=246481.37, stdev=1306198.21 00:41:03.610 clat (msec): min=5, max=117, avg=34.11, stdev=30.24 00:41:03.610 lat (msec): min=5, max=117, avg=34.36, stdev=30.42 00:41:03.610 clat percentiles (msec): 00:41:03.610 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 14], 00:41:03.610 | 30.00th=[ 19], 40.00th=[ 21], 50.00th=[ 22], 60.00th=[ 22], 00:41:03.610 | 70.00th=[ 29], 80.00th=[ 51], 90.00th=[ 95], 95.00th=[ 109], 00:41:03.610 | 99.00th=[ 116], 99.50th=[ 117], 99.90th=[ 118], 99.95th=[ 118], 00:41:03.610 | 99.99th=[ 118] 00:41:03.610 bw ( KiB/s): min= 9112, max=10928, per=13.41%, avg=10020.00, stdev=1284.11, samples=2 00:41:03.610 iops : min= 2278, max= 2732, avg=2505.00, stdev=321.03, samples=2 00:41:03.610 lat (msec) : 10=7.41%, 20=48.71%, 50=29.80%, 100=9.16%, 250=4.91% 00:41:03.610 cpu : usr=2.58%, sys=2.18%, ctx=278, majf=0, minf=2 00:41:03.610 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:41:03.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:03.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:03.610 issued rwts: total=2121,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:03.610 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:03.610 job2: (groupid=0, jobs=1): err= 0: pid=1278504: Mon Dec 16 03:02:33 2024 00:41:03.610 read: IOPS=3538, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1013msec) 00:41:03.610 slat (nsec): min=1432, max=14239k, avg=131493.47, stdev=914195.07 00:41:03.610 clat (usec): min=4121, max=63070, avg=15327.28, stdev=7629.98 00:41:03.610 lat (usec): min=4132, max=63076, avg=15458.78, stdev=7722.07 00:41:03.610 clat 
percentiles (usec): 00:41:03.610 | 1.00th=[ 8717], 5.00th=[10945], 10.00th=[11338], 20.00th=[11994], 00:41:03.610 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12780], 60.00th=[13435], 00:41:03.610 | 70.00th=[14615], 80.00th=[16319], 90.00th=[21627], 95.00th=[28705], 00:41:03.610 | 99.00th=[52167], 99.50th=[57934], 99.90th=[63177], 99.95th=[63177], 00:41:03.610 | 99.99th=[63177] 00:41:03.610 write: IOPS=3875, BW=15.1MiB/s (15.9MB/s)(15.3MiB/1013msec); 0 zone resets 00:41:03.610 slat (usec): min=2, max=11610, avg=129.71, stdev=734.46 00:41:03.610 clat (usec): min=2986, max=63056, avg=18690.38, stdev=10846.12 00:41:03.610 lat (usec): min=2997, max=63060, avg=18820.08, stdev=10908.37 00:41:03.610 clat percentiles (usec): 00:41:03.610 | 1.00th=[ 5407], 5.00th=[ 9765], 10.00th=[10552], 20.00th=[10945], 00:41:03.610 | 30.00th=[11469], 40.00th=[11994], 50.00th=[13304], 60.00th=[19530], 00:41:03.610 | 70.00th=[21103], 80.00th=[22414], 90.00th=[36439], 95.00th=[45876], 00:41:03.610 | 99.00th=[50594], 99.50th=[52691], 99.90th=[57934], 99.95th=[63177], 00:41:03.610 | 99.99th=[63177] 00:41:03.610 bw ( KiB/s): min=14016, max=16368, per=20.33%, avg=15192.00, stdev=1663.12, samples=2 00:41:03.610 iops : min= 3504, max= 4092, avg=3798.00, stdev=415.78, samples=2 00:41:03.610 lat (msec) : 4=0.17%, 10=4.93%, 20=69.08%, 50=24.01%, 100=1.81% 00:41:03.610 cpu : usr=3.66%, sys=4.25%, ctx=323, majf=0, minf=1 00:41:03.610 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:41:03.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:03.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:03.610 issued rwts: total=3584,3926,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:03.610 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:03.610 job3: (groupid=0, jobs=1): err= 0: pid=1278516: Mon Dec 16 03:02:33 2024 00:41:03.610 read: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec) 00:41:03.610 slat (nsec): 
min=1428, max=10360k, avg=94081.96, stdev=769872.25 00:41:03.610 clat (usec): min=3626, max=21640, avg=11769.61, stdev=2747.36 00:41:03.610 lat (usec): min=3634, max=26672, avg=11863.70, stdev=2824.25 00:41:03.610 clat percentiles (usec): 00:41:03.610 | 1.00th=[ 7046], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[10159], 00:41:03.610 | 30.00th=[10421], 40.00th=[10814], 50.00th=[10945], 60.00th=[11338], 00:41:03.610 | 70.00th=[11600], 80.00th=[13042], 90.00th=[16057], 95.00th=[18220], 00:41:03.610 | 99.00th=[20055], 99.50th=[20579], 99.90th=[21103], 99.95th=[21365], 00:41:03.610 | 99.99th=[21627] 00:41:03.610 write: IOPS=5738, BW=22.4MiB/s (23.5MB/s)(22.6MiB/1007msec); 0 zone resets 00:41:03.610 slat (usec): min=2, max=9502, avg=75.50, stdev=546.53 00:41:03.610 clat (usec): min=2008, max=20862, avg=10590.00, stdev=2304.23 00:41:03.610 lat (usec): min=2019, max=20866, avg=10665.50, stdev=2344.21 00:41:03.610 clat percentiles (usec): 00:41:03.610 | 1.00th=[ 4047], 5.00th=[ 6783], 10.00th=[ 7308], 20.00th=[ 9110], 00:41:03.610 | 30.00th=[10159], 40.00th=[10552], 50.00th=[10945], 60.00th=[11076], 00:41:03.610 | 70.00th=[11338], 80.00th=[11469], 90.00th=[12125], 95.00th=[15139], 00:41:03.610 | 99.00th=[16909], 99.50th=[16909], 99.90th=[20055], 99.95th=[20579], 00:41:03.610 | 99.99th=[20841] 00:41:03.610 bw ( KiB/s): min=20664, max=24552, per=30.26%, avg=22608.00, stdev=2749.23, samples=2 00:41:03.610 iops : min= 5166, max= 6138, avg=5652.00, stdev=687.31, samples=2 00:41:03.610 lat (msec) : 4=0.56%, 10=22.64%, 20=75.99%, 50=0.82% 00:41:03.610 cpu : usr=4.37%, sys=6.36%, ctx=429, majf=0, minf=1 00:41:03.610 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:41:03.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:03.610 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:03.610 issued rwts: total=5632,5779,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:03.610 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:41:03.610 00:41:03.610 Run status group 0 (all jobs): 00:41:03.610 READ: bw=68.9MiB/s (72.2MB/s), 8417KiB/s-25.3MiB/s (8619kB/s-26.6MB/s), io=69.8MiB (73.1MB), run=1006-1013msec 00:41:03.610 WRITE: bw=73.0MiB/s (76.5MB/s), 9.92MiB/s-25.8MiB/s (10.4MB/s-27.1MB/s), io=73.9MiB (77.5MB), run=1006-1013msec 00:41:03.610 00:41:03.610 Disk stats (read/write): 00:41:03.610 nvme0n1: ios=5314/5632, merge=0/0, ticks=53182/49638, in_queue=102820, util=99.10% 00:41:03.610 nvme0n2: ios=1647/2048, merge=0/0, ticks=12273/39574, in_queue=51847, util=86.38% 00:41:03.610 nvme0n3: ios=3102/3271, merge=0/0, ticks=46271/57724, in_queue=103995, util=96.24% 00:41:03.610 nvme0n4: ios=4627/4903, merge=0/0, ticks=52993/50040, in_queue=103033, util=98.83% 00:41:03.610 03:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:41:03.610 03:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1278668 00:41:03.610 03:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:41:03.610 03:02:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:41:03.610 [global] 00:41:03.610 thread=1 00:41:03.610 invalidate=1 00:41:03.610 rw=read 00:41:03.610 time_based=1 00:41:03.610 runtime=10 00:41:03.610 ioengine=libaio 00:41:03.610 direct=1 00:41:03.610 bs=4096 00:41:03.610 iodepth=1 00:41:03.610 norandommap=1 00:41:03.610 numjobs=1 00:41:03.610 00:41:03.610 [job0] 00:41:03.610 filename=/dev/nvme0n1 00:41:03.610 [job1] 00:41:03.610 filename=/dev/nvme0n2 00:41:03.610 [job2] 00:41:03.610 filename=/dev/nvme0n3 00:41:03.610 [job3] 00:41:03.610 filename=/dev/nvme0n4 00:41:03.610 Could not set queue depth (nvme0n1) 00:41:03.610 Could not set queue depth (nvme0n2) 00:41:03.610 Could not set queue depth (nvme0n3) 00:41:03.610 
Could not set queue depth (nvme0n4) 00:41:03.873 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:03.873 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:03.873 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:03.873 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:03.873 fio-3.35 00:41:03.873 Starting 4 threads 00:41:06.406 03:02:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:41:06.665 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=15822848, buflen=4096 00:41:06.665 fio: pid=1278950, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:06.665 03:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:41:06.924 03:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:06.924 03:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:41:06.924 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=43864064, buflen=4096 00:41:06.924 fio: pid=1278941, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:07.183 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=51953664, buflen=4096 00:41:07.183 fio: pid=1278902, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:07.183 03:02:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:07.183 03:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:41:07.443 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=49119232, buflen=4096 00:41:07.443 fio: pid=1278920, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:07.443 03:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:07.443 03:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:41:07.443 00:41:07.443 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1278902: Mon Dec 16 03:02:37 2024 00:41:07.443 read: IOPS=4054, BW=15.8MiB/s (16.6MB/s)(49.5MiB/3129msec) 00:41:07.443 slat (usec): min=5, max=10742, avg= 8.72, stdev=123.06 00:41:07.443 clat (usec): min=165, max=41293, avg=235.62, stdev=366.28 00:41:07.443 lat (usec): min=171, max=41300, avg=244.35, stdev=386.61 00:41:07.443 clat percentiles (usec): 00:41:07.443 | 1.00th=[ 188], 5.00th=[ 204], 10.00th=[ 208], 20.00th=[ 215], 00:41:07.443 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 233], 00:41:07.443 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 255], 95.00th=[ 269], 00:41:07.443 | 99.00th=[ 355], 99.50th=[ 412], 99.90th=[ 437], 99.95th=[ 469], 00:41:07.443 | 99.99th=[ 2704] 00:41:07.443 bw ( KiB/s): min=13600, max=17280, per=34.77%, avg=16235.00, stdev=1370.84, samples=6 00:41:07.443 iops : min= 3400, max= 4320, avg=4058.67, stdev=342.64, samples=6 00:41:07.443 lat (usec) : 250=85.09%, 500=14.88%, 
1000=0.01% 00:41:07.443 lat (msec) : 4=0.01%, 50=0.01% 00:41:07.443 cpu : usr=1.05%, sys=3.84%, ctx=12687, majf=0, minf=2 00:41:07.443 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:07.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:07.443 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:07.443 issued rwts: total=12685,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:07.443 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:07.443 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1278920: Mon Dec 16 03:02:37 2024 00:41:07.443 read: IOPS=3567, BW=13.9MiB/s (14.6MB/s)(46.8MiB/3362msec) 00:41:07.443 slat (usec): min=6, max=31195, avg=15.38, stdev=341.14 00:41:07.443 clat (usec): min=172, max=41410, avg=260.47, stdev=754.64 00:41:07.443 lat (usec): min=181, max=41418, avg=275.84, stdev=828.97 00:41:07.443 clat percentiles (usec): 00:41:07.443 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 206], 20.00th=[ 219], 00:41:07.443 | 30.00th=[ 235], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 255], 00:41:07.443 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 285], 00:41:07.443 | 99.00th=[ 314], 99.50th=[ 338], 99.90th=[ 478], 99.95th=[ 8029], 00:41:07.443 | 99.99th=[41157] 00:41:07.443 bw ( KiB/s): min=10936, max=16176, per=30.35%, avg=14174.33, stdev=1757.61, samples=6 00:41:07.443 iops : min= 2734, max= 4044, avg=3543.50, stdev=439.37, samples=6 00:41:07.443 lat (usec) : 250=52.55%, 500=47.36% 00:41:07.443 lat (msec) : 2=0.02%, 4=0.02%, 10=0.02%, 50=0.03% 00:41:07.443 cpu : usr=1.52%, sys=5.21%, ctx=12000, majf=0, minf=1 00:41:07.443 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:07.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:07.443 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:07.443 issued rwts: 
total=11993,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:07.443 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:07.443 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1278941: Mon Dec 16 03:02:37 2024 00:41:07.443 read: IOPS=3620, BW=14.1MiB/s (14.8MB/s)(41.8MiB/2958msec) 00:41:07.443 slat (nsec): min=5578, max=34984, avg=7734.70, stdev=1406.90 00:41:07.443 clat (usec): min=169, max=41931, avg=265.19, stdev=887.14 00:41:07.443 lat (usec): min=175, max=41954, avg=272.92, stdev=887.33 00:41:07.443 clat percentiles (usec): 00:41:07.443 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 210], 20.00th=[ 215], 00:41:07.443 | 30.00th=[ 219], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 239], 00:41:07.443 | 70.00th=[ 262], 80.00th=[ 285], 90.00th=[ 306], 95.00th=[ 318], 00:41:07.443 | 99.00th=[ 343], 99.50th=[ 363], 99.90th=[ 424], 99.95th=[ 3490], 00:41:07.443 | 99.99th=[41157] 00:41:07.443 bw ( KiB/s): min=12824, max=17456, per=33.55%, avg=15667.20, stdev=2151.02, samples=5 00:41:07.443 iops : min= 3206, max= 4364, avg=3916.80, stdev=537.76, samples=5 00:41:07.443 lat (usec) : 250=66.38%, 500=33.54%, 750=0.02% 00:41:07.443 lat (msec) : 4=0.01%, 50=0.05% 00:41:07.443 cpu : usr=1.01%, sys=3.48%, ctx=10710, majf=0, minf=1 00:41:07.443 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:07.443 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:07.443 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:07.443 issued rwts: total=10710,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:07.443 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:07.443 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1278950: Mon Dec 16 03:02:37 2024 00:41:07.443 read: IOPS=1434, BW=5738KiB/s (5876kB/s)(15.1MiB/2693msec) 00:41:07.443 slat (nsec): min=6290, max=35720, avg=9028.93, stdev=2488.63 
00:41:07.443 clat (usec): min=205, max=41397, avg=680.62, stdev=3963.97 00:41:07.443 lat (usec): min=212, max=41421, avg=689.65, stdev=3965.40 00:41:07.443 clat percentiles (usec): 00:41:07.443 | 1.00th=[ 229], 5.00th=[ 243], 10.00th=[ 251], 20.00th=[ 265], 00:41:07.443 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 297], 00:41:07.443 | 70.00th=[ 306], 80.00th=[ 318], 90.00th=[ 334], 95.00th=[ 351], 00:41:07.443 | 99.00th=[ 1434], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:07.443 | 99.99th=[41157] 00:41:07.443 bw ( KiB/s): min= 96, max=13040, per=11.71%, avg=5468.80, stdev=6601.40, samples=5 00:41:07.443 iops : min= 24, max= 3260, avg=1367.20, stdev=1650.35, samples=5 00:41:07.443 lat (usec) : 250=9.89%, 500=89.08% 00:41:07.443 lat (msec) : 2=0.05%, 50=0.96% 00:41:07.443 cpu : usr=0.97%, sys=1.89%, ctx=3864, majf=0, minf=1 00:41:07.443 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:07.444 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:07.444 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:07.444 issued rwts: total=3864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:07.444 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:07.444 00:41:07.444 Run status group 0 (all jobs): 00:41:07.444 READ: bw=45.6MiB/s (47.8MB/s), 5738KiB/s-15.8MiB/s (5876kB/s-16.6MB/s), io=153MiB (161MB), run=2693-3362msec 00:41:07.444 00:41:07.444 Disk stats (read/write): 00:41:07.444 nvme0n1: ios=12416/0, merge=0/0, ticks=2874/0, in_queue=2874, util=93.81% 00:41:07.444 nvme0n2: ios=11919/0, merge=0/0, ticks=3212/0, in_queue=3212, util=97.72% 00:41:07.444 nvme0n3: ios=10705/0, merge=0/0, ticks=2603/0, in_queue=2603, util=96.20% 00:41:07.444 nvme0n4: ios=3463/0, merge=0/0, ticks=2485/0, in_queue=2485, util=96.39% 00:41:07.444 03:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs 
$concat_malloc_bdevs 00:41:07.444 03:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:41:07.703 03:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:07.703 03:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:41:07.962 03:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:07.962 03:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:41:08.222 03:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:08.222 03:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:41:08.481 03:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:41:08.481 03:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1278668 00:41:08.481 03:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:41:08.481 03:02:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:08.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:41:08.481 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:08.481 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:41:08.481 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:41:08.481 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:08.481 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:41:08.481 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:08.481 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:41:08.481 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:41:08.481 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:41:08.481 nvmf hotplug test: fio failed as expected 00:41:08.481 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:08.855 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:41:08.855 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:41:08.855 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:41:08.855 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 
00:41:08.855 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:41:08.855 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:08.855 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:41:08.855 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:08.855 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:41:08.855 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:08.855 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:08.855 rmmod nvme_tcp 00:41:08.855 rmmod nvme_fabrics 00:41:08.855 rmmod nvme_keyring 00:41:08.855 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:08.855 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:41:08.855 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:41:08.855 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1276252 ']' 00:41:08.855 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1276252 00:41:08.855 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1276252 ']' 00:41:08.855 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1276252 00:41:08.855 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:41:08.855 03:02:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:08.855 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1276252 00:41:08.855 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:08.855 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:08.855 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1276252' 00:41:08.855 killing process with pid 1276252 00:41:08.855 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1276252 00:41:08.855 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1276252 00:41:09.225 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:09.226 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:09.226 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:09.226 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:41:09.226 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:41:09.226 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:09.226 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:41:09.226 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:09.226 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:09.226 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:09.226 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:09.226 03:02:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:11.134 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:11.134 00:41:11.134 real 0m25.816s 00:41:11.134 user 1m30.535s 00:41:11.134 sys 0m11.680s 00:41:11.134 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:11.134 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:11.134 ************************************ 00:41:11.134 END TEST nvmf_fio_target 00:41:11.134 ************************************ 00:41:11.134 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:41:11.134 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:11.134 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:11.134 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:11.134 ************************************ 00:41:11.134 START TEST nvmf_bdevio 00:41:11.134 ************************************ 00:41:11.134 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:41:11.134 * Looking for test storage... 00:41:11.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:11.134 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:11.134 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:41:11.134 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:11.395 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:11.395 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:11.395 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:11.395 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:11.395 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:41:11.395 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:41:11.395 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:41:11.395 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:41:11.395 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:41:11.395 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:41:11.395 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 
00:41:11.395 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:11.395 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:41:11.395 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:41:11.395 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:11.395 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:11.395 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:41:11.395 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:11.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:11.396 --rc genhtml_branch_coverage=1 00:41:11.396 --rc genhtml_function_coverage=1 00:41:11.396 --rc genhtml_legend=1 00:41:11.396 --rc geninfo_all_blocks=1 00:41:11.396 --rc geninfo_unexecuted_blocks=1 00:41:11.396 00:41:11.396 ' 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:11.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:11.396 --rc genhtml_branch_coverage=1 00:41:11.396 --rc genhtml_function_coverage=1 00:41:11.396 --rc genhtml_legend=1 00:41:11.396 --rc geninfo_all_blocks=1 00:41:11.396 --rc geninfo_unexecuted_blocks=1 00:41:11.396 00:41:11.396 ' 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:11.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:11.396 --rc genhtml_branch_coverage=1 00:41:11.396 --rc genhtml_function_coverage=1 00:41:11.396 --rc genhtml_legend=1 00:41:11.396 --rc geninfo_all_blocks=1 00:41:11.396 --rc geninfo_unexecuted_blocks=1 00:41:11.396 00:41:11.396 ' 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:11.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:11.396 --rc genhtml_branch_coverage=1 00:41:11.396 --rc genhtml_function_coverage=1 00:41:11.396 --rc genhtml_legend=1 
00:41:11.396 --rc geninfo_all_blocks=1 00:41:11.396 --rc geninfo_unexecuted_blocks=1 00:41:11.396 00:41:11.396 ' 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:11.396 03:02:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:11.396 03:02:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:11.396 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:11.397 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:11.397 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:11.397 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:11.397 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:11.397 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:41:11.397 03:02:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:17.982 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:41:17.982 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:41:17.982 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:17.982 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:17.982 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:17.982 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:17.982 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:17.982 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:41:17.982 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:17.982 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:41:17.982 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:41:17.982 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:41:17.982 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:41:17.982 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:41:17.982 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:41:17.982 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:17.982 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:17.982 03:02:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:17.982 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:17.982 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:17.983 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:17.983 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:17.983 03:02:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:17.983 Found net devices under 0000:af:00.0: cvl_0_0 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:17.983 Found net devices under 0000:af:00.1: cvl_0_1 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:17.983 03:02:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:17.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:17.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:41:17.983 00:41:17.983 --- 10.0.0.2 ping statistics --- 00:41:17.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:17.983 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:17.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:17.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:41:17.983 00:41:17.983 --- 10.0.0.1 ping statistics --- 00:41:17.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:17.983 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:17.983 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:17.984 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:17.984 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:17.984 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:17.984 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:41:17.984 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:17.984 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:17.984 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:17.984 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=1283190 00:41:17.984 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1283190 00:41:17.984 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:41:17.984 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1283190 ']' 00:41:17.984 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:17.984 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:17.984 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:17.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:17.984 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:17.984 03:02:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:17.984 [2024-12-16 03:02:47.810734] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:17.984 [2024-12-16 03:02:47.811730] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:41:17.984 [2024-12-16 03:02:47.811768] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:17.984 [2024-12-16 03:02:47.891688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:17.984 [2024-12-16 03:02:47.915138] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:17.984 [2024-12-16 03:02:47.915176] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:17.984 [2024-12-16 03:02:47.915183] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:17.984 [2024-12-16 03:02:47.915190] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:17.984 [2024-12-16 03:02:47.915198] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:17.984 [2024-12-16 03:02:47.916689] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:41:17.984 [2024-12-16 03:02:47.916798] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:41:17.984 [2024-12-16 03:02:47.916907] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:41:17.984 [2024-12-16 03:02:47.916908] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:41:17.984 [2024-12-16 03:02:47.979778] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:17.984 [2024-12-16 03:02:47.980895] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:17.984 [2024-12-16 03:02:47.981016] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:41:17.984 [2024-12-16 03:02:47.981406] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:17.984 [2024-12-16 03:02:47.981442] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:17.984 [2024-12-16 03:02:48.045662] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:17.984 Malloc0 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:17.984 [2024-12-16 03:02:48.125929] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:17.984 { 00:41:17.984 "params": { 00:41:17.984 "name": "Nvme$subsystem", 00:41:17.984 "trtype": "$TEST_TRANSPORT", 00:41:17.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:17.984 "adrfam": "ipv4", 00:41:17.984 "trsvcid": "$NVMF_PORT", 00:41:17.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:17.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:17.984 "hdgst": ${hdgst:-false}, 00:41:17.984 "ddgst": ${ddgst:-false} 00:41:17.984 }, 00:41:17.984 "method": "bdev_nvme_attach_controller" 00:41:17.984 } 00:41:17.984 EOF 00:41:17.984 )") 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:41:17.984 03:02:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:17.984 "params": { 00:41:17.984 "name": "Nvme1", 00:41:17.984 "trtype": "tcp", 00:41:17.984 "traddr": "10.0.0.2", 00:41:17.984 "adrfam": "ipv4", 00:41:17.984 "trsvcid": "4420", 00:41:17.984 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:17.984 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:17.984 "hdgst": false, 00:41:17.984 "ddgst": false 00:41:17.984 }, 00:41:17.984 "method": "bdev_nvme_attach_controller" 00:41:17.984 }' 00:41:17.984 [2024-12-16 03:02:48.177976] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:41:17.984 [2024-12-16 03:02:48.178022] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1283218 ] 00:41:17.984 [2024-12-16 03:02:48.253170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:17.984 [2024-12-16 03:02:48.277702] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:17.985 [2024-12-16 03:02:48.277808] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:17.985 [2024-12-16 03:02:48.277809] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:41:17.985 I/O targets: 00:41:17.985 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:41:17.985 00:41:17.985 00:41:17.985 CUnit - A unit testing framework for C - Version 2.1-3 00:41:17.985 http://cunit.sourceforge.net/ 00:41:17.985 00:41:17.985 00:41:17.985 Suite: bdevio tests on: Nvme1n1 00:41:17.985 Test: blockdev write read block ...passed 00:41:17.985 Test: blockdev write zeroes read block ...passed 00:41:17.985 Test: blockdev write zeroes read no split ...passed 00:41:17.985 Test: blockdev 
write zeroes read split ...passed 00:41:17.985 Test: blockdev write zeroes read split partial ...passed 00:41:17.985 Test: blockdev reset ...[2024-12-16 03:02:48.573322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:41:17.985 [2024-12-16 03:02:48.573383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2390340 (9): Bad file descriptor 00:41:17.985 [2024-12-16 03:02:48.576634] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:41:17.985 passed 00:41:17.985 Test: blockdev write read 8 blocks ...passed 00:41:17.985 Test: blockdev write read size > 128k ...passed 00:41:17.985 Test: blockdev write read invalid size ...passed 00:41:18.244 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:41:18.244 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:41:18.244 Test: blockdev write read max offset ...passed 00:41:18.244 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:41:18.244 Test: blockdev writev readv 8 blocks ...passed 00:41:18.244 Test: blockdev writev readv 30 x 1block ...passed 00:41:18.244 Test: blockdev writev readv block ...passed 00:41:18.244 Test: blockdev writev readv size > 128k ...passed 00:41:18.244 Test: blockdev writev readv size > 128k in two iovs ...passed 00:41:18.244 Test: blockdev comparev and writev ...[2024-12-16 03:02:48.869112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:18.244 [2024-12-16 03:02:48.869144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:18.244 [2024-12-16 03:02:48.869158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:18.244 
[2024-12-16 03:02:48.869166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:41:18.244 [2024-12-16 03:02:48.869456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:18.244 [2024-12-16 03:02:48.869467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:41:18.244 [2024-12-16 03:02:48.869480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:18.244 [2024-12-16 03:02:48.869488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:41:18.244 [2024-12-16 03:02:48.869760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:18.244 [2024-12-16 03:02:48.869770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:41:18.244 [2024-12-16 03:02:48.869783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:18.244 [2024-12-16 03:02:48.869791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:41:18.244 [2024-12-16 03:02:48.870089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:18.244 [2024-12-16 03:02:48.870102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:41:18.244 [2024-12-16 03:02:48.870113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:18.244 [2024-12-16 03:02:48.870121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:41:18.504 passed 00:41:18.504 Test: blockdev nvme passthru rw ...passed 00:41:18.504 Test: blockdev nvme passthru vendor specific ...[2024-12-16 03:02:48.952152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:18.504 [2024-12-16 03:02:48.952169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:41:18.504 [2024-12-16 03:02:48.952277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:18.504 [2024-12-16 03:02:48.952288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:41:18.504 [2024-12-16 03:02:48.952407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:18.504 [2024-12-16 03:02:48.952421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:41:18.504 [2024-12-16 03:02:48.952532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:18.504 [2024-12-16 03:02:48.952542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:41:18.504 passed 00:41:18.504 Test: blockdev nvme admin passthru ...passed 00:41:18.504 Test: blockdev copy ...passed 00:41:18.504 00:41:18.504 Run Summary: Type Total Ran Passed Failed Inactive 00:41:18.504 suites 1 1 n/a 0 0 00:41:18.504 tests 23 23 23 0 0 00:41:18.504 asserts 152 152 152 0 n/a 00:41:18.504 00:41:18.504 Elapsed time = 1.171 
seconds 00:41:18.504 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:18.504 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:18.504 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:18.504 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:18.504 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:41:18.504 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:41:18.504 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:18.504 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:41:18.504 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:18.504 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:41:18.504 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:18.504 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:18.504 rmmod nvme_tcp 00:41:18.763 rmmod nvme_fabrics 00:41:18.763 rmmod nvme_keyring 00:41:18.763 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:18.763 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:41:18.763 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:41:18.763 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 1283190 ']' 00:41:18.763 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1283190 00:41:18.763 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1283190 ']' 00:41:18.763 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1283190 00:41:18.763 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:41:18.763 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:18.763 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1283190 00:41:18.763 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:41:18.763 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:41:18.763 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1283190' 00:41:18.763 killing process with pid 1283190 00:41:18.763 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1283190 00:41:18.763 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1283190 00:41:19.023 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:19.023 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:19.023 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:19.023 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:41:19.023 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:41:19.023 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:19.023 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:41:19.023 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:19.023 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:19.023 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:19.023 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:19.023 03:02:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:20.931 03:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:20.931 00:41:20.931 real 0m9.821s 00:41:20.931 user 0m8.277s 00:41:20.931 sys 0m5.246s 00:41:20.931 03:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:20.931 03:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:20.931 ************************************ 00:41:20.931 END TEST nvmf_bdevio 00:41:20.931 ************************************ 00:41:20.931 03:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:41:20.931 00:41:20.931 real 4m30.233s 00:41:20.931 user 9m2.341s 00:41:20.931 sys 1m50.651s 00:41:20.931 03:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:41:20.931 03:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:20.931 ************************************ 00:41:20.931 END TEST nvmf_target_core_interrupt_mode 00:41:20.931 ************************************ 00:41:20.931 03:02:51 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:41:20.931 03:02:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:20.931 03:02:51 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:20.931 03:02:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:21.191 ************************************ 00:41:21.191 START TEST nvmf_interrupt 00:41:21.191 ************************************ 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:41:21.191 * Looking for test storage... 
00:41:21.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:21.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:21.191 --rc genhtml_branch_coverage=1 00:41:21.191 --rc genhtml_function_coverage=1 00:41:21.191 --rc genhtml_legend=1 00:41:21.191 --rc geninfo_all_blocks=1 00:41:21.191 --rc geninfo_unexecuted_blocks=1 00:41:21.191 00:41:21.191 ' 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:21.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:21.191 --rc genhtml_branch_coverage=1 00:41:21.191 --rc 
genhtml_function_coverage=1 00:41:21.191 --rc genhtml_legend=1 00:41:21.191 --rc geninfo_all_blocks=1 00:41:21.191 --rc geninfo_unexecuted_blocks=1 00:41:21.191 00:41:21.191 ' 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:21.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:21.191 --rc genhtml_branch_coverage=1 00:41:21.191 --rc genhtml_function_coverage=1 00:41:21.191 --rc genhtml_legend=1 00:41:21.191 --rc geninfo_all_blocks=1 00:41:21.191 --rc geninfo_unexecuted_blocks=1 00:41:21.191 00:41:21.191 ' 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:21.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:21.191 --rc genhtml_branch_coverage=1 00:41:21.191 --rc genhtml_function_coverage=1 00:41:21.191 --rc genhtml_legend=1 00:41:21.191 --rc geninfo_all_blocks=1 00:41:21.191 --rc geninfo_unexecuted_blocks=1 00:41:21.191 00:41:21.191 ' 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:21.191 
03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:21.191 
03:02:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:21.191 03:02:51 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:21.191 
03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:41:21.191 03:02:51 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:27.764 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:27.764 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:41:27.764 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:27.764 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:27.764 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:27.764 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:27.764 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:27.764 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:41:27.764 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:27.764 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:41:27.764 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:41:27.764 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:41:27.764 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:41:27.764 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:41:27.764 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:41:27.764 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:27.764 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:27.764 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:27.764 03:02:57 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:27.764 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:27.764 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:27.764 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:27.764 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:27.764 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:27.765 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:27.765 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:27.765 03:02:57 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:27.765 Found net devices under 0000:af:00.0: cvl_0_0 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:27.765 Found net devices under 0000:af:00.1: cvl_0_1 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:27.765 03:02:57 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:27.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:27.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:41:27.765 00:41:27.765 --- 10.0.0.2 ping statistics --- 00:41:27.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:27.765 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:27.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:27.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:41:27.765 00:41:27.765 --- 10.0.0.1 ping statistics --- 00:41:27.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:27.765 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:27.765 03:02:57 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1286816 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1286816 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1286816 ']' 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:27.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:27.765 03:02:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:27.765 [2024-12-16 03:02:57.752040] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:27.765 [2024-12-16 03:02:57.752949] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:41:27.765 [2024-12-16 03:02:57.752988] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:27.765 [2024-12-16 03:02:57.831262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:27.765 [2024-12-16 03:02:57.852995] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:27.765 [2024-12-16 03:02:57.853033] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:27.766 [2024-12-16 03:02:57.853041] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:27.766 [2024-12-16 03:02:57.853047] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:27.766 [2024-12-16 03:02:57.853052] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:27.766 [2024-12-16 03:02:57.854113] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:27.766 [2024-12-16 03:02:57.854113] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:27.766 [2024-12-16 03:02:57.916813] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:27.766 [2024-12-16 03:02:57.917429] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:27.766 [2024-12-16 03:02:57.917628] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:41:27.766 03:02:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:27.766 03:02:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:41:27.766 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:27.766 03:02:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:27.766 03:02:57 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:27.766 03:02:57 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:27.766 03:02:57 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:41:27.766 03:02:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:41:27.766 03:02:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:41:27.766 03:02:57 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:41:27.766 5000+0 records in 00:41:27.766 5000+0 records out 00:41:27.766 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0172715 s, 593 MB/s 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:27.766 AIO0 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.766 03:02:58 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:27.766 [2024-12-16 03:02:58.046793] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:27.766 [2024-12-16 03:02:58.087187] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1286816 0 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1286816 0 idle 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1286816 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1286816 -w 256 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1286816 root 20 0 128.2g 46080 33792 R 6.7 0.0 0:00.23 reactor_0' 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1286816 root 20 0 128.2g 46080 33792 R 6.7 0.0 0:00.23 reactor_0 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:27.766 
03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1286816 1 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1286816 1 idle 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1286816 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1286816 -w 256 00:41:27.766 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:28.025 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1286859 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:41:28.025 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1286859 root 20 0 128.2g 
46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:41:28.025 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:28.025 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:28.025 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:28.025 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:28.025 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:28.025 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:28.025 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:28.025 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:28.025 03:02:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:41:28.025 03:02:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1286961 00:41:28.025 03:02:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:28.025 03:02:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:41:28.025 03:02:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:41:28.025 03:02:58 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1286816 0 00:41:28.025 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1286816 0 busy 00:41:28.025 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1286816 00:41:28.025 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:28.025 03:02:58 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:41:28.025 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:41:28.025 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:28.025 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:41:28.025 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:28.025 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:28.025 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:28.026 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1286816 -w 256 00:41:28.026 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:28.026 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1286816 root 20 0 128.2g 46848 33792 R 0.0 0.1 0:00.23 reactor_0' 00:41:28.026 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1286816 root 20 0 128.2g 46848 33792 R 0.0 0.1 0:00.23 reactor_0 00:41:28.026 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:28.026 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:28.026 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:28.026 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:28.026 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:41:28.026 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:41:28.026 03:02:58 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:41:29.404 03:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:41:29.404 03:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:29.404 03:02:59 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@26 -- # top -bHn 1 -p 1286816 -w 256 00:41:29.404 03:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:29.404 03:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1286816 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:02.53 reactor_0' 00:41:29.404 03:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1286816 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:02.53 reactor_0 00:41:29.404 03:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:29.404 03:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:29.404 03:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:41:29.404 03:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:41:29.404 03:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:41:29.404 03:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:41:29.404 03:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:41:29.404 03:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:29.404 03:02:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:41:29.404 03:02:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:41:29.404 03:02:59 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1286816 1 00:41:29.404 03:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1286816 1 busy 00:41:29.404 03:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1286816 00:41:29.404 03:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:29.404 03:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:41:29.404 03:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local 
busy_threshold=30 00:41:29.404 03:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:29.404 03:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:41:29.404 03:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:29.404 03:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:29.404 03:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:29.404 03:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1286816 -w 256 00:41:29.404 03:02:59 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:29.404 03:03:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1286859 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:01.33 reactor_1' 00:41:29.404 03:03:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1286859 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:01.33 reactor_1 00:41:29.404 03:03:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:29.404 03:03:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:29.404 03:03:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:41:29.404 03:03:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:41:29.404 03:03:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:41:29.404 03:03:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:41:29.404 03:03:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:41:29.404 03:03:00 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:29.404 03:03:00 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1286961 00:41:39.381 Initializing NVMe Controllers 00:41:39.381 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:39.381 
Controller IO queue size 256, less than required. 00:41:39.381 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:39.381 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:41:39.381 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:41:39.381 Initialization complete. Launching workers. 00:41:39.381 ======================================================== 00:41:39.381 Latency(us) 00:41:39.381 Device Information : IOPS MiB/s Average min max 00:41:39.381 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16587.80 64.80 15442.38 3057.53 56522.24 00:41:39.381 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 17162.60 67.04 14921.15 7576.40 25602.95 00:41:39.381 ======================================================== 00:41:39.381 Total : 33750.40 131.84 15177.33 3057.53 56522.24 00:41:39.381 00:41:39.381 03:03:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:41:39.381 03:03:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1286816 0 00:41:39.381 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1286816 0 idle 00:41:39.381 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1286816 00:41:39.381 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:39.381 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:39.381 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:39.381 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:39.381 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:39.381 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:39.381 03:03:08 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:39.381 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:39.381 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:39.381 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1286816 -w 256 00:41:39.381 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:39.381 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1286816 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:20.21 reactor_0' 00:41:39.381 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1286816 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:20.21 reactor_0 00:41:39.381 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:39.381 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:39.381 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:39.381 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:39.381 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:39.381 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:39.381 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:39.382 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:39.382 03:03:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:41:39.382 03:03:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1286816 1 00:41:39.382 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1286816 1 idle 00:41:39.382 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1286816 00:41:39.382 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=1 00:41:39.382 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:39.382 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:39.382 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:39.382 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:39.382 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:39.382 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:39.382 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:39.382 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:39.382 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1286816 -w 256 00:41:39.382 03:03:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:39.382 03:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1286859 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:10.00 reactor_1' 00:41:39.382 03:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1286859 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:10.00 reactor_1 00:41:39.382 03:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:39.382 03:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:39.382 03:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:39.382 03:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:39.382 03:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:39.382 03:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:39.382 03:03:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:39.382 03:03:09 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@35 -- # return 0 00:41:39.382 03:03:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:39.382 03:03:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:41:39.382 03:03:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:41:39.382 03:03:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:41:39.382 03:03:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:41:39.382 03:03:09 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1286816 0 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1286816 0 idle 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1286816 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:41.286 03:03:11 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1286816 -w 256 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1286816 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:20.44 reactor_0' 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1286816 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:20.44 reactor_0 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 
0 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1286816 1 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1286816 1 idle 00:41:41.286 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1286816 00:41:41.287 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:41.287 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:41.287 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:41.287 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:41.287 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:41.287 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:41.287 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:41.287 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:41.287 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:41.287 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1286816 -w 256 00:41:41.287 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:41.287 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1286859 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:10.10 reactor_1' 00:41:41.287 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1286859 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:10.10 reactor_1 00:41:41.287 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:41.287 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:41.287 03:03:11 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:41.287 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:41.287 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:41.287 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:41.287 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:41.287 03:03:11 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:41.287 03:03:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:41.546 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:41:41.546 03:03:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:41.546 03:03:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:41:41.546 03:03:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:41:41.546 03:03:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:41.546 03:03:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:41:41.546 03:03:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:41.546 03:03:11 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:41:41.546 03:03:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:41:41.546 03:03:11 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:41:41.546 03:03:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:41.546 03:03:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:41:41.546 03:03:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:41.546 03:03:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- 
# set +e 00:41:41.546 03:03:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:41.546 03:03:11 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:41.546 rmmod nvme_tcp 00:41:41.546 rmmod nvme_fabrics 00:41:41.546 rmmod nvme_keyring 00:41:41.546 03:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:41.546 03:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:41:41.546 03:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:41:41.546 03:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 1286816 ']' 00:41:41.546 03:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1286816 00:41:41.546 03:03:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1286816 ']' 00:41:41.546 03:03:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1286816 00:41:41.546 03:03:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:41:41.546 03:03:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:41.546 03:03:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1286816 00:41:41.546 03:03:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:41.546 03:03:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:41.546 03:03:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1286816' 00:41:41.546 killing process with pid 1286816 00:41:41.546 03:03:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1286816 00:41:41.546 03:03:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1286816 00:41:41.805 03:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:41.805 03:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ 
tcp == \t\c\p ]] 00:41:41.805 03:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:41.805 03:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:41:41.805 03:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:41:41.805 03:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:41.805 03:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:41:41.805 03:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:41.805 03:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:41.805 03:03:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:41.805 03:03:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:41.805 03:03:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:44.339 03:03:14 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:44.339 00:41:44.339 real 0m22.768s 00:41:44.339 user 0m39.585s 00:41:44.339 sys 0m8.534s 00:41:44.339 03:03:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:44.339 03:03:14 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:44.339 ************************************ 00:41:44.339 END TEST nvmf_interrupt 00:41:44.339 ************************************ 00:41:44.339 00:41:44.339 real 35m24.617s 00:41:44.339 user 86m3.585s 00:41:44.339 sys 10m30.143s 00:41:44.339 03:03:14 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:44.339 03:03:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:44.339 ************************************ 00:41:44.339 END TEST nvmf_tcp 00:41:44.339 ************************************ 00:41:44.339 03:03:14 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:41:44.339 03:03:14 -- 
spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:44.339 03:03:14 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:44.339 03:03:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:44.339 03:03:14 -- common/autotest_common.sh@10 -- # set +x 00:41:44.339 ************************************ 00:41:44.339 START TEST spdkcli_nvmf_tcp 00:41:44.339 ************************************ 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:44.339 * Looking for test storage... 00:41:44.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 
00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:44.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:44.339 --rc genhtml_branch_coverage=1 00:41:44.339 --rc genhtml_function_coverage=1 00:41:44.339 --rc genhtml_legend=1 00:41:44.339 --rc geninfo_all_blocks=1 
00:41:44.339 --rc geninfo_unexecuted_blocks=1 00:41:44.339 00:41:44.339 ' 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:44.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:44.339 --rc genhtml_branch_coverage=1 00:41:44.339 --rc genhtml_function_coverage=1 00:41:44.339 --rc genhtml_legend=1 00:41:44.339 --rc geninfo_all_blocks=1 00:41:44.339 --rc geninfo_unexecuted_blocks=1 00:41:44.339 00:41:44.339 ' 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:44.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:44.339 --rc genhtml_branch_coverage=1 00:41:44.339 --rc genhtml_function_coverage=1 00:41:44.339 --rc genhtml_legend=1 00:41:44.339 --rc geninfo_all_blocks=1 00:41:44.339 --rc geninfo_unexecuted_blocks=1 00:41:44.339 00:41:44.339 ' 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:44.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:44.339 --rc genhtml_branch_coverage=1 00:41:44.339 --rc genhtml_function_coverage=1 00:41:44.339 --rc genhtml_legend=1 00:41:44.339 --rc geninfo_all_blocks=1 00:41:44.339 --rc geninfo_unexecuted_blocks=1 00:41:44.339 00:41:44.339 ' 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:44.339 03:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:41:44.339 03:03:14 
spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:44.340 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1290097 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1290097 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1290097 ']' 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:44.340 03:03:14 
spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:44.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:44.340 [2024-12-16 03:03:14.745454] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:41:44.340 [2024-12-16 03:03:14.745500] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1290097 ] 00:41:44.340 [2024-12-16 03:03:14.818048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:44.340 [2024-12-16 03:03:14.841776] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:44.340 [2024-12-16 03:03:14.841779] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- 
# [[ tcp == \r\d\m\a ]] 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:44.340 03:03:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:41:44.340 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:41:44.340 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:41:44.340 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:41:44.340 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:41:44.340 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:41:44.340 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:41:44.340 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:44.340 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:41:44.340 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:41:44.340 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:44.340 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:44.340 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:41:44.340 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:44.340 '\''/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:44.340 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:41:44.340 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:44.340 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:44.340 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:44.340 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:44.340 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:41:44.340 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:41:44.340 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:44.340 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:41:44.340 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:44.340 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:41:44.340 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:41:44.340 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:41:44.340 ' 00:41:47.628 [2024-12-16 03:03:17.658634] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:48.564 [2024-12-16 03:03:18.999145] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4260 *** 00:41:51.097 [2024-12-16 03:03:21.478795] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:41:53.002 [2024-12-16 03:03:23.629486] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:41:54.907 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:41:54.907 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:41:54.907 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:41:54.907 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:41:54.907 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:41:54.907 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:41:54.907 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:41:54.907 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:54.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:41:54.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:41:54.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:54.907 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:54.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:41:54.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:54.907 Executing command: 
['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:54.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:41:54.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:54.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:54.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:54.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:54.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:41:54.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:41:54.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:54.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:41:54.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:54.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:41:54.907 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:41:54.907 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:41:54.907 03:03:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # 
timing_exit spdkcli_create_nvmf_config 00:41:54.907 03:03:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:54.907 03:03:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:54.907 03:03:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:41:54.907 03:03:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:54.907 03:03:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:54.907 03:03:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:41:54.907 03:03:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:41:55.474 03:03:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:41:55.474 03:03:25 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:41:55.474 03:03:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:41:55.474 03:03:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:55.474 03:03:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:55.474 03:03:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:41:55.474 03:03:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:55.474 03:03:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:55.474 03:03:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:41:55.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' 
'\''Malloc4'\'' 00:41:55.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:55.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:41:55.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:41:55.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:41:55.474 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:41:55.474 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:55.474 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:41:55.474 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:41:55.474 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:41:55.474 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:41:55.474 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:41:55.474 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:41:55.474 ' 00:42:00.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:42:00.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:42:00.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:42:00.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:42:00.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:42:00.747 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:42:00.747 Executing command: ['/nvmf/subsystem 
delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:42:00.747 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:42:00.747 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:42:00.747 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:42:00.747 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:42:00.747 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:42:00.747 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:42:00.747 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:42:01.006 03:03:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:42:01.006 03:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:01.006 03:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:01.006 03:03:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1290097 00:42:01.006 03:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1290097 ']' 00:42:01.006 03:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1290097 00:42:01.006 03:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:42:01.006 03:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:01.006 03:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1290097 00:42:01.006 03:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:01.006 03:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:01.006 03:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1290097' 00:42:01.006 killing process with pid 1290097 00:42:01.006 03:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1290097 00:42:01.006 03:03:31 
spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1290097 00:42:01.265 03:03:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:42:01.266 03:03:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:42:01.266 03:03:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1290097 ']' 00:42:01.266 03:03:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1290097 00:42:01.266 03:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1290097 ']' 00:42:01.266 03:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1290097 00:42:01.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1290097) - No such process 00:42:01.266 03:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1290097 is not found' 00:42:01.266 Process with pid 1290097 is not found 00:42:01.266 03:03:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:42:01.266 03:03:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:42:01.266 03:03:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:42:01.266 00:42:01.266 real 0m17.275s 00:42:01.266 user 0m38.106s 00:42:01.266 sys 0m0.786s 00:42:01.266 03:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:01.266 03:03:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:01.266 ************************************ 00:42:01.266 END TEST spdkcli_nvmf_tcp 00:42:01.266 ************************************ 00:42:01.266 03:03:31 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:42:01.266 03:03:31 -- common/autotest_common.sh@1105 -- # '[' 3 
-le 1 ']' 00:42:01.266 03:03:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:01.266 03:03:31 -- common/autotest_common.sh@10 -- # set +x 00:42:01.266 ************************************ 00:42:01.266 START TEST nvmf_identify_passthru 00:42:01.266 ************************************ 00:42:01.266 03:03:31 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:42:01.266 * Looking for test storage... 00:42:01.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:01.526 03:03:31 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:01.526 03:03:31 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:42:01.526 03:03:31 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:01.526 03:03:31 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:01.526 03:03:31 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:01.526 03:03:31 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:01.526 03:03:31 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:01.526 03:03:31 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:42:01.526 03:03:31 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:42:01.526 03:03:31 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:42:01.526 03:03:31 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:42:01.526 03:03:31 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:42:01.526 03:03:31 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:42:01.526 03:03:31 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:42:01.526 03:03:31 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:42:01.526 03:03:31 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:42:01.526 03:03:31 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:42:01.526 03:03:31 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:01.526 03:03:31 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:01.526 03:03:32 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:42:01.526 03:03:32 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:42:01.526 03:03:32 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:01.526 03:03:32 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:42:01.526 03:03:32 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:42:01.526 03:03:32 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:42:01.526 03:03:32 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:42:01.526 03:03:32 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:01.526 03:03:32 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:42:01.526 03:03:32 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:42:01.526 03:03:32 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:01.526 03:03:32 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:01.526 03:03:32 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:42:01.526 03:03:32 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:01.526 03:03:32 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:01.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:01.526 --rc genhtml_branch_coverage=1 00:42:01.526 --rc genhtml_function_coverage=1 00:42:01.526 --rc genhtml_legend=1 
00:42:01.526 --rc geninfo_all_blocks=1 00:42:01.526 --rc geninfo_unexecuted_blocks=1 00:42:01.526 00:42:01.526 ' 00:42:01.526 03:03:32 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:01.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:01.526 --rc genhtml_branch_coverage=1 00:42:01.526 --rc genhtml_function_coverage=1 00:42:01.526 --rc genhtml_legend=1 00:42:01.526 --rc geninfo_all_blocks=1 00:42:01.526 --rc geninfo_unexecuted_blocks=1 00:42:01.526 00:42:01.526 ' 00:42:01.526 03:03:32 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:01.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:01.526 --rc genhtml_branch_coverage=1 00:42:01.526 --rc genhtml_function_coverage=1 00:42:01.526 --rc genhtml_legend=1 00:42:01.526 --rc geninfo_all_blocks=1 00:42:01.526 --rc geninfo_unexecuted_blocks=1 00:42:01.526 00:42:01.526 ' 00:42:01.526 03:03:32 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:01.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:01.526 --rc genhtml_branch_coverage=1 00:42:01.526 --rc genhtml_function_coverage=1 00:42:01.526 --rc genhtml_legend=1 00:42:01.526 --rc geninfo_all_blocks=1 00:42:01.526 --rc geninfo_unexecuted_blocks=1 00:42:01.526 00:42:01.526 ' 00:42:01.526 03:03:32 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:01.526 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:42:01.526 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:01.526 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:01.526 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:01.526 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:01.526 03:03:32 nvmf_identify_passthru -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:01.526 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:01.526 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:01.526 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:01.526 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:01.526 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:01.526 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:42:01.526 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:42:01.526 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:01.526 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:01.526 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:01.526 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:01.526 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:01.526 03:03:32 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:42:01.526 03:03:32 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:01.526 03:03:32 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:01.526 03:03:32 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:01.526 03:03:32 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:01.526 03:03:32 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:01.527 03:03:32 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:01.527 03:03:32 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:42:01.527 03:03:32 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:01.527 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:42:01.527 03:03:32 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:01.527 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:01.527 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:01.527 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:01.527 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:01.527 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:01.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:01.527 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:01.527 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:01.527 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:01.527 03:03:32 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:01.527 03:03:32 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:42:01.527 03:03:32 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:01.527 03:03:32 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:01.527 03:03:32 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:01.527 03:03:32 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:01.527 03:03:32 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:01.527 03:03:32 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:01.527 03:03:32 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:42:01.527 03:03:32 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:01.527 03:03:32 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:42:01.527 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:01.527 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:01.527 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:01.527 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:01.527 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:01.527 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:01.527 03:03:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:01.527 03:03:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:01.527 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:01.527 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:01.527 03:03:32 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:42:01.527 03:03:32 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:08.097 
03:03:37 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:08.097 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:08.097 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:08.098 Found 0000:af:00.1 
(0x8086 - 0x159b) 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:08.098 Found net devices under 0000:af:00.0: cvl_0_0 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:08.098 03:03:37 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:08.098 Found net devices under 0000:af:00.1: cvl_0_1 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:08.098 
03:03:37 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:08.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:42:08.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:42:08.098 00:42:08.098 --- 10.0.0.2 ping statistics --- 00:42:08.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:08.098 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:08.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:08.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:42:08.098 00:42:08.098 --- 10.0.0.1 ping statistics --- 00:42:08.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:08.098 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:08.098 03:03:37 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:08.098 03:03:37 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:42:08.098 03:03:37 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:08.098 03:03:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:08.098 03:03:37 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:42:08.098 
03:03:37 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:42:08.098 03:03:37 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:42:08.098 03:03:37 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:42:08.098 03:03:37 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:42:08.098 03:03:37 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:42:08.098 03:03:37 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:42:08.098 03:03:37 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:42:08.098 03:03:37 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:42:08.098 03:03:37 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:42:08.098 03:03:38 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:42:08.098 03:03:38 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:42:08.098 03:03:38 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:42:08.098 03:03:38 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:42:08.098 03:03:38 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:42:08.098 03:03:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:42:08.098 03:03:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:42:08.098 03:03:38 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:42:12.289 03:03:42 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ7244049A1P0FGN 00:42:12.289 03:03:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:42:12.289 03:03:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:42:12.289 03:03:42 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:42:16.478 03:03:46 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:42:16.478 03:03:46 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:42:16.478 03:03:46 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:16.478 03:03:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:16.478 03:03:46 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:42:16.478 03:03:46 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:16.478 03:03:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:16.478 03:03:46 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1297191 00:42:16.478 03:03:46 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:42:16.478 03:03:46 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:16.478 03:03:46 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1297191 00:42:16.478 03:03:46 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1297191 ']' 00:42:16.478 03:03:46 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:42:16.478 03:03:46 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:16.478 03:03:46 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:16.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:16.478 03:03:46 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:16.478 03:03:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:16.478 [2024-12-16 03:03:46.410395] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:42:16.478 [2024-12-16 03:03:46.410440] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:16.478 [2024-12-16 03:03:46.489081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:16.478 [2024-12-16 03:03:46.512675] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:16.478 [2024-12-16 03:03:46.512713] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:16.478 [2024-12-16 03:03:46.512720] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:16.478 [2024-12-16 03:03:46.512726] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:16.478 [2024-12-16 03:03:46.512730] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:42:16.478 [2024-12-16 03:03:46.514069] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:42:16.478 [2024-12-16 03:03:46.514104] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:42:16.478 [2024-12-16 03:03:46.514211] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:42:16.478 [2024-12-16 03:03:46.514212] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:42:16.479 03:03:46 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:16.479 03:03:46 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:42:16.479 03:03:46 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:42:16.479 03:03:46 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.479 03:03:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:16.479 INFO: Log level set to 20 00:42:16.479 INFO: Requests: 00:42:16.479 { 00:42:16.479 "jsonrpc": "2.0", 00:42:16.479 "method": "nvmf_set_config", 00:42:16.479 "id": 1, 00:42:16.479 "params": { 00:42:16.479 "admin_cmd_passthru": { 00:42:16.479 "identify_ctrlr": true 00:42:16.479 } 00:42:16.479 } 00:42:16.479 } 00:42:16.479 00:42:16.479 INFO: response: 00:42:16.479 { 00:42:16.479 "jsonrpc": "2.0", 00:42:16.479 "id": 1, 00:42:16.479 "result": true 00:42:16.479 } 00:42:16.479 00:42:16.479 03:03:46 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.479 03:03:46 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:42:16.479 03:03:46 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.479 03:03:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:16.479 INFO: Setting log level to 20 00:42:16.479 INFO: Setting log level to 20 00:42:16.479 INFO: Log level set to 20 00:42:16.479 INFO: Log level set to 20 00:42:16.479 
INFO: Requests: 00:42:16.479 { 00:42:16.479 "jsonrpc": "2.0", 00:42:16.479 "method": "framework_start_init", 00:42:16.479 "id": 1 00:42:16.479 } 00:42:16.479 00:42:16.479 INFO: Requests: 00:42:16.479 { 00:42:16.479 "jsonrpc": "2.0", 00:42:16.479 "method": "framework_start_init", 00:42:16.479 "id": 1 00:42:16.479 } 00:42:16.479 00:42:16.479 [2024-12-16 03:03:46.641826] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:42:16.479 INFO: response: 00:42:16.479 { 00:42:16.479 "jsonrpc": "2.0", 00:42:16.479 "id": 1, 00:42:16.479 "result": true 00:42:16.479 } 00:42:16.479 00:42:16.479 INFO: response: 00:42:16.479 { 00:42:16.479 "jsonrpc": "2.0", 00:42:16.479 "id": 1, 00:42:16.479 "result": true 00:42:16.479 } 00:42:16.479 00:42:16.479 03:03:46 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.479 03:03:46 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:16.479 03:03:46 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.479 03:03:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:16.479 INFO: Setting log level to 40 00:42:16.479 INFO: Setting log level to 40 00:42:16.479 INFO: Setting log level to 40 00:42:16.479 [2024-12-16 03:03:46.655129] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:16.479 03:03:46 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:16.479 03:03:46 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:42:16.479 03:03:46 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:16.479 03:03:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:16.479 03:03:46 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:42:16.479 03:03:46 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:16.479 03:03:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:19.017 Nvme0n1 00:42:19.017 03:03:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.017 03:03:49 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:42:19.017 03:03:49 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.017 03:03:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:19.017 03:03:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.017 03:03:49 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:42:19.017 03:03:49 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.017 03:03:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:19.017 03:03:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.017 03:03:49 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:19.017 03:03:49 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.017 03:03:49 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:19.017 [2024-12-16 03:03:49.562062] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:19.017 03:03:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.017 03:03:49 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:42:19.017 03:03:49 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.017 03:03:49 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:19.017 [ 00:42:19.017 { 00:42:19.017 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:42:19.017 "subtype": "Discovery", 00:42:19.017 "listen_addresses": [], 00:42:19.017 "allow_any_host": true, 00:42:19.017 "hosts": [] 00:42:19.017 }, 00:42:19.017 { 00:42:19.017 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:42:19.017 "subtype": "NVMe", 00:42:19.017 "listen_addresses": [ 00:42:19.017 { 00:42:19.017 "trtype": "TCP", 00:42:19.017 "adrfam": "IPv4", 00:42:19.017 "traddr": "10.0.0.2", 00:42:19.017 "trsvcid": "4420" 00:42:19.017 } 00:42:19.017 ], 00:42:19.017 "allow_any_host": true, 00:42:19.017 "hosts": [], 00:42:19.017 "serial_number": "SPDK00000000000001", 00:42:19.017 "model_number": "SPDK bdev Controller", 00:42:19.017 "max_namespaces": 1, 00:42:19.017 "min_cntlid": 1, 00:42:19.017 "max_cntlid": 65519, 00:42:19.017 "namespaces": [ 00:42:19.017 { 00:42:19.017 "nsid": 1, 00:42:19.017 "bdev_name": "Nvme0n1", 00:42:19.017 "name": "Nvme0n1", 00:42:19.017 "nguid": "2F0DF357F6F3421A8FDAC66129092BA7", 00:42:19.017 "uuid": "2f0df357-f6f3-421a-8fda-c66129092ba7" 00:42:19.017 } 00:42:19.017 ] 00:42:19.017 } 00:42:19.017 ] 00:42:19.017 03:03:49 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.017 03:03:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:19.017 03:03:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:42:19.017 03:03:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:42:19.277 03:03:49 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ7244049A1P0FGN 00:42:19.277 03:03:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:19.277 03:03:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:42:19.277 03:03:49 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:42:19.537 03:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:42:19.537 03:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ7244049A1P0FGN '!=' BTLJ7244049A1P0FGN ']' 00:42:19.537 03:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:42:19.537 03:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:19.537 03:03:50 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.537 03:03:50 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:19.537 03:03:50 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.537 03:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:42:19.537 03:03:50 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:42:19.537 03:03:50 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:19.537 03:03:50 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:42:19.537 03:03:50 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:19.537 03:03:50 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:42:19.537 03:03:50 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:19.537 03:03:50 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:19.537 rmmod nvme_tcp 00:42:19.537 rmmod nvme_fabrics 00:42:19.537 rmmod nvme_keyring 00:42:19.537 03:03:50 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:19.537 03:03:50 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:42:19.537 03:03:50 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:42:19.537 03:03:50 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 1297191 ']' 00:42:19.537 03:03:50 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1297191 00:42:19.537 03:03:50 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1297191 ']' 00:42:19.537 03:03:50 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1297191 00:42:19.537 03:03:50 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:42:19.537 03:03:50 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:19.537 03:03:50 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1297191 00:42:19.870 03:03:50 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:19.870 03:03:50 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:19.870 03:03:50 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1297191' 00:42:19.870 killing process with pid 1297191 00:42:19.870 03:03:50 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1297191 00:42:19.870 03:03:50 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1297191 00:42:21.327 03:03:51 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:21.327 03:03:51 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:21.327 03:03:51 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:21.327 03:03:51 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:42:21.327 03:03:51 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:42:21.327 03:03:51 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:21.327 03:03:51 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:42:21.327 03:03:51 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:21.327 03:03:51 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:21.327 03:03:51 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:21.327 03:03:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:21.327 03:03:51 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:23.234 03:03:53 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:23.234 00:42:23.234 real 0m21.922s 00:42:23.234 user 0m28.092s 00:42:23.234 sys 0m5.377s 00:42:23.234 03:03:53 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:23.234 03:03:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:23.234 ************************************ 00:42:23.234 END TEST nvmf_identify_passthru 00:42:23.234 ************************************ 00:42:23.234 03:03:53 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:42:23.234 03:03:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:23.234 03:03:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:23.234 03:03:53 -- common/autotest_common.sh@10 -- # set +x 00:42:23.234 ************************************ 00:42:23.234 START TEST nvmf_dif 00:42:23.234 ************************************ 00:42:23.234 03:03:53 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:42:23.494 * Looking for test storage... 
00:42:23.494 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:23.494 03:03:53 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:23.494 03:03:53 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:42:23.494 03:03:53 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:23.494 03:03:53 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:23.494 03:03:53 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:23.494 03:03:53 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:23.494 03:03:53 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:23.494 03:03:53 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:42:23.494 03:03:53 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:42:23.494 03:03:53 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:42:23.494 03:03:53 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:42:23.494 03:03:53 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:42:23.494 03:03:53 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:42:23.494 03:03:53 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:42:23.494 03:03:53 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:23.494 03:03:53 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:42:23.494 03:03:53 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:42:23.494 03:03:53 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:23.494 03:03:54 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:23.494 03:03:54 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:42:23.494 03:03:54 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:42:23.494 03:03:54 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:23.494 03:03:54 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:42:23.494 03:03:54 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:42:23.494 03:03:54 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:42:23.494 03:03:54 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:42:23.494 03:03:54 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:23.494 03:03:54 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:42:23.494 03:03:54 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:42:23.494 03:03:54 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:23.494 03:03:54 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:23.494 03:03:54 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:42:23.494 03:03:54 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:23.494 03:03:54 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:23.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:23.494 --rc genhtml_branch_coverage=1 00:42:23.494 --rc genhtml_function_coverage=1 00:42:23.494 --rc genhtml_legend=1 00:42:23.494 --rc geninfo_all_blocks=1 00:42:23.494 --rc geninfo_unexecuted_blocks=1 00:42:23.494 00:42:23.494 ' 00:42:23.494 03:03:54 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:23.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:23.494 --rc genhtml_branch_coverage=1 00:42:23.494 --rc genhtml_function_coverage=1 00:42:23.494 --rc genhtml_legend=1 00:42:23.494 --rc geninfo_all_blocks=1 00:42:23.494 --rc geninfo_unexecuted_blocks=1 00:42:23.494 00:42:23.494 ' 00:42:23.494 03:03:54 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:42:23.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:23.494 --rc genhtml_branch_coverage=1 00:42:23.494 --rc genhtml_function_coverage=1 00:42:23.494 --rc genhtml_legend=1 00:42:23.494 --rc geninfo_all_blocks=1 00:42:23.494 --rc geninfo_unexecuted_blocks=1 00:42:23.494 00:42:23.494 ' 00:42:23.494 03:03:54 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:23.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:23.495 --rc genhtml_branch_coverage=1 00:42:23.495 --rc genhtml_function_coverage=1 00:42:23.495 --rc genhtml_legend=1 00:42:23.495 --rc geninfo_all_blocks=1 00:42:23.495 --rc geninfo_unexecuted_blocks=1 00:42:23.495 00:42:23.495 ' 00:42:23.495 03:03:54 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:42:23.495 03:03:54 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:23.495 03:03:54 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:42:23.495 03:03:54 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:23.495 03:03:54 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:23.495 03:03:54 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:23.495 03:03:54 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:23.495 03:03:54 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:23.495 03:03:54 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:23.495 03:03:54 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:42:23.495 03:03:54 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:23.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:23.495 03:03:54 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:42:23.495 03:03:54 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:42:23.495 03:03:54 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:42:23.495 03:03:54 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:42:23.495 03:03:54 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:23.495 03:03:54 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:23.495 03:03:54 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:23.495 03:03:54 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:42:23.495 03:03:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:42:30.068 03:03:59 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:30.068 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:30.068 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:30.068 03:03:59 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:30.068 Found net devices under 0000:af:00.0: cvl_0_0 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:30.068 Found net devices under 0000:af:00.1: cvl_0_1 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:30.068 
03:03:59 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:30.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:42:30.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:42:30.068 00:42:30.068 --- 10.0.0.2 ping statistics --- 00:42:30.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:30.068 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:30.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:30.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:42:30.068 00:42:30.068 --- 10.0.0.1 ping statistics --- 00:42:30.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:30.068 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:42:30.068 03:03:59 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:31.974 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:42:31.974 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:42:31.974 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:42:31.974 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:42:31.974 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:42:31.974 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:42:31.974 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:42:31.974 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:42:31.974 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:42:31.974 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:42:31.974 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:42:31.974 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:42:31.975 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:42:31.975 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:42:31.975 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:42:31.975 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:42:31.975 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:42:32.234 03:04:02 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:32.234 03:04:02 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:32.234 03:04:02 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:32.234 03:04:02 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:32.234 03:04:02 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:32.234 03:04:02 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:32.234 03:04:02 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:42:32.234 03:04:02 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:42:32.234 03:04:02 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:32.234 03:04:02 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:32.234 03:04:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:32.234 03:04:02 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1302579 00:42:32.234 03:04:02 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1302579 00:42:32.234 03:04:02 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:42:32.234 03:04:02 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1302579 ']' 00:42:32.234 03:04:02 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:32.234 03:04:02 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:32.234 03:04:02 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:42:32.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:32.234 03:04:02 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:32.234 03:04:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:32.234 [2024-12-16 03:04:02.806549] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:42:32.234 [2024-12-16 03:04:02.806593] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:32.234 [2024-12-16 03:04:02.886181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:32.494 [2024-12-16 03:04:02.907770] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:32.494 [2024-12-16 03:04:02.907806] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:32.494 [2024-12-16 03:04:02.907814] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:32.494 [2024-12-16 03:04:02.907820] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:32.494 [2024-12-16 03:04:02.907826] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:42:32.494 [2024-12-16 03:04:02.908353] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:42:32.494 03:04:03 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:32.494 03:04:03 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:42:32.494 03:04:03 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:32.494 03:04:03 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:32.494 03:04:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:32.494 03:04:03 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:32.494 03:04:03 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:42:32.494 03:04:03 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:42:32.494 03:04:03 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.494 03:04:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:32.494 [2024-12-16 03:04:03.047597] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:32.494 03:04:03 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.494 03:04:03 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:42:32.494 03:04:03 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:32.494 03:04:03 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:32.494 03:04:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:32.494 ************************************ 00:42:32.494 START TEST fio_dif_1_default 00:42:32.494 ************************************ 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:32.494 bdev_null0 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:32.494 [2024-12-16 03:04:03.123929] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:42:32.494 03:04:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:32.494 { 00:42:32.494 "params": { 00:42:32.494 "name": "Nvme$subsystem", 00:42:32.494 "trtype": "$TEST_TRANSPORT", 00:42:32.494 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:32.494 "adrfam": "ipv4", 00:42:32.494 "trsvcid": "$NVMF_PORT", 00:42:32.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:32.495 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:32.495 "hdgst": ${hdgst:-false}, 00:42:32.495 "ddgst": ${ddgst:-false} 00:42:32.495 }, 00:42:32.495 "method": "bdev_nvme_attach_controller" 00:42:32.495 } 00:42:32.495 EOF 00:42:32.495 )") 00:42:32.495 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:42:32.495 03:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:42:32.495 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:32.495 03:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:42:32.495 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:32.495 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:32.495 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:42:32.495 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:32.495 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:32.495 03:04:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:42:32.495 03:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:42:32.495 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:32.495 03:04:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:42:32.495 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:42:32.495 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:32.495 03:04:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:42:32.495 03:04:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:42:32.495 03:04:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:32.495 "params": { 00:42:32.495 "name": "Nvme0", 00:42:32.495 "trtype": "tcp", 00:42:32.495 "traddr": "10.0.0.2", 00:42:32.495 "adrfam": "ipv4", 00:42:32.495 "trsvcid": "4420", 00:42:32.495 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:32.495 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:32.495 "hdgst": false, 00:42:32.495 "ddgst": false 00:42:32.495 }, 00:42:32.495 "method": "bdev_nvme_attach_controller" 00:42:32.495 }' 00:42:32.776 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:32.776 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:32.776 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:32.776 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:32.776 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:32.776 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:32.776 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:32.776 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:32.776 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:32.776 03:04:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:33.039 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:33.039 fio-3.35 
00:42:33.039 Starting 1 thread 00:42:45.253 00:42:45.253 filename0: (groupid=0, jobs=1): err= 0: pid=1302931: Mon Dec 16 03:04:14 2024 00:42:45.253 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10015msec) 00:42:45.253 slat (nsec): min=5920, max=24377, avg=6201.72, stdev=1021.56 00:42:45.253 clat (usec): min=40804, max=45572, avg=41026.40, stdev=332.72 00:42:45.253 lat (usec): min=40810, max=45596, avg=41032.61, stdev=333.10 00:42:45.253 clat percentiles (usec): 00:42:45.253 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:42:45.253 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:45.253 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:45.253 | 99.00th=[42206], 99.50th=[42206], 99.90th=[45351], 99.95th=[45351], 00:42:45.253 | 99.99th=[45351] 00:42:45.253 bw ( KiB/s): min= 384, max= 416, per=99.53%, avg=388.80, stdev=11.72, samples=20 00:42:45.253 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:42:45.253 lat (msec) : 50=100.00% 00:42:45.253 cpu : usr=92.25%, sys=7.51%, ctx=14, majf=0, minf=0 00:42:45.253 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:45.253 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.253 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:45.253 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:45.253 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:45.253 00:42:45.253 Run status group 0 (all jobs): 00:42:45.253 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10015-10015msec 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.253 00:42:45.253 real 0m11.222s 00:42:45.253 user 0m16.202s 00:42:45.253 sys 0m1.056s 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:45.253 ************************************ 00:42:45.253 END TEST fio_dif_1_default 00:42:45.253 ************************************ 00:42:45.253 03:04:14 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:42:45.253 03:04:14 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:45.253 03:04:14 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:45.253 03:04:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:45.253 ************************************ 00:42:45.253 START TEST fio_dif_1_multi_subsystems 00:42:45.253 ************************************ 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:45.253 bdev_null0 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:45.253 [2024-12-16 03:04:14.413529] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:45.253 bdev_null1 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:45.253 03:04:14 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:45.253 03:04:14 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:42:45.253 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:45.253 { 00:42:45.253 "params": { 00:42:45.253 "name": "Nvme$subsystem", 00:42:45.253 "trtype": "$TEST_TRANSPORT", 00:42:45.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:45.253 "adrfam": "ipv4", 00:42:45.253 "trsvcid": "$NVMF_PORT", 00:42:45.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:45.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:45.253 "hdgst": ${hdgst:-false}, 00:42:45.253 "ddgst": ${ddgst:-false} 00:42:45.253 }, 00:42:45.253 "method": "bdev_nvme_attach_controller" 00:42:45.253 } 00:42:45.253 EOF 00:42:45.253 )") 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- 
nvmf/common.sh@582 -- # cat 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:45.254 { 00:42:45.254 "params": { 00:42:45.254 "name": "Nvme$subsystem", 00:42:45.254 "trtype": "$TEST_TRANSPORT", 00:42:45.254 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:45.254 "adrfam": "ipv4", 00:42:45.254 "trsvcid": "$NVMF_PORT", 00:42:45.254 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:45.254 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:45.254 "hdgst": ${hdgst:-false}, 00:42:45.254 "ddgst": ${ddgst:-false} 00:42:45.254 }, 00:42:45.254 "method": "bdev_nvme_attach_controller" 00:42:45.254 } 00:42:45.254 EOF 00:42:45.254 )") 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
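The `config+=(...)` / `IFS=,` / `printf` sequence traced above assembles one `bdev_nvme_attach_controller` JSON block per subsystem and joins them with commas before piping through `jq`. The sketch below reproduces only the output shape seen in the log (it is not the real `gen_nvmf_target_json` from `nvmf/common.sh`, and the address/port values are copied from this run's output, not computed):

```shell
# Hypothetical re-creation of the per-subsystem config assembly; builds one
# attach-controller params block per argument and joins them with commas.
gen_target_json_sketch() {
    out=''
    sep=''
    for sub in "$@"; do
        out="$out$sep{\"params\":{\"name\":\"Nvme$sub\",\"trtype\":\"tcp\",\"traddr\":\"10.0.0.2\",\"adrfam\":\"ipv4\",\"trsvcid\":\"4420\",\"subnqn\":\"nqn.2016-06.io.spdk:cnode$sub\",\"hostnqn\":\"nqn.2016-06.io.spdk:host$sub\",\"hdgst\":false,\"ddgst\":false},\"method\":\"bdev_nvme_attach_controller\"}"
        sep=','
    done
    printf '%s\n' "$out"
}

# Two subsystems, matching the fio_dif_1_multi_subsystems invocation.
gen_target_json_sketch 0 1
```

fio then receives this config on `/dev/fd/62` via `--spdk_json_conf`, which is how the `spdk_bdev` ioengine learns which controllers to attach.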
00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:45.254 "params": { 00:42:45.254 "name": "Nvme0", 00:42:45.254 "trtype": "tcp", 00:42:45.254 "traddr": "10.0.0.2", 00:42:45.254 "adrfam": "ipv4", 00:42:45.254 "trsvcid": "4420", 00:42:45.254 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:45.254 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:45.254 "hdgst": false, 00:42:45.254 "ddgst": false 00:42:45.254 }, 00:42:45.254 "method": "bdev_nvme_attach_controller" 00:42:45.254 },{ 00:42:45.254 "params": { 00:42:45.254 "name": "Nvme1", 00:42:45.254 "trtype": "tcp", 00:42:45.254 "traddr": "10.0.0.2", 00:42:45.254 "adrfam": "ipv4", 00:42:45.254 "trsvcid": "4420", 00:42:45.254 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:45.254 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:45.254 "hdgst": false, 00:42:45.254 "ddgst": false 00:42:45.254 }, 00:42:45.254 "method": "bdev_nvme_attach_controller" 00:42:45.254 }' 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:45.254 03:04:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:45.254 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:45.254 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:45.254 fio-3.35 00:42:45.254 Starting 2 threads 00:42:55.238 00:42:55.239 filename0: (groupid=0, jobs=1): err= 0: pid=1304850: Mon Dec 16 03:04:25 2024 00:42:55.239 read: IOPS=199, BW=797KiB/s (816kB/s)(7984KiB/10020msec) 00:42:55.239 slat (nsec): min=5988, max=25704, avg=7030.79, stdev=1817.57 00:42:55.239 clat (usec): min=386, max=42582, avg=20058.62, stdev=20382.44 00:42:55.239 lat (usec): min=392, max=42588, avg=20065.65, stdev=20381.91 00:42:55.239 clat percentiles (usec): 00:42:55.239 | 1.00th=[ 400], 5.00th=[ 412], 10.00th=[ 420], 20.00th=[ 486], 00:42:55.239 | 30.00th=[ 594], 40.00th=[ 611], 50.00th=[ 627], 60.00th=[40633], 00:42:55.239 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:42:55.239 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:42:55.239 | 99.99th=[42730] 00:42:55.239 bw ( KiB/s): min= 704, max= 960, per=67.00%, avg=796.80, stdev=56.77, samples=20 00:42:55.239 iops : min= 176, max= 240, avg=199.20, stdev=14.19, samples=20 00:42:55.239 lat (usec) : 500=25.00%, 750=27.10% 00:42:55.239 lat (msec) : 50=47.90% 00:42:55.239 cpu : usr=96.97%, sys=2.78%, ctx=15, majf=0, minf=129 00:42:55.239 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:55.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:42:55.239 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:55.239 issued rwts: total=1996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:55.239 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:55.239 filename1: (groupid=0, jobs=1): err= 0: pid=1304851: Mon Dec 16 03:04:25 2024 00:42:55.239 read: IOPS=97, BW=392KiB/s (401kB/s)(3920KiB/10008msec) 00:42:55.239 slat (nsec): min=6001, max=26497, avg=7683.82, stdev=2418.19 00:42:55.239 clat (usec): min=385, max=43131, avg=40823.10, stdev=2594.41 00:42:55.239 lat (usec): min=391, max=43158, avg=40830.79, stdev=2594.46 00:42:55.239 clat percentiles (usec): 00:42:55.239 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:42:55.239 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:42:55.239 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:42:55.239 | 99.00th=[41157], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:42:55.239 | 99.99th=[43254] 00:42:55.239 bw ( KiB/s): min= 384, max= 416, per=32.83%, avg=390.40, stdev=13.13, samples=20 00:42:55.239 iops : min= 96, max= 104, avg=97.60, stdev= 3.28, samples=20 00:42:55.239 lat (usec) : 500=0.41% 00:42:55.239 lat (msec) : 50=99.59% 00:42:55.239 cpu : usr=96.32%, sys=3.44%, ctx=13, majf=0, minf=110 00:42:55.239 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:55.239 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:55.239 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:55.239 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:55.239 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:55.239 00:42:55.239 Run status group 0 (all jobs): 00:42:55.239 READ: bw=1188KiB/s (1217kB/s), 392KiB/s-797KiB/s (401kB/s-816kB/s), io=11.6MiB (12.2MB), run=10008-10020msec 00:42:55.239 03:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # 
destroy_subsystems 0 1 00:42:55.239 03:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:42:55.239 03:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:55.239 03:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:55.239 03:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:42:55.239 03:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:55.239 03:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.239 03:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:55.239 03:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.239 03:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:55.239 03:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.239 03:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:55.239 03:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.239 03:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:55.239 03:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:55.239 03:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:42:55.239 03:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:55.239 03:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.239 03:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:55.239 03:04:25 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.239 03:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:55.239 03:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.239 03:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:55.239 03:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.239 00:42:55.239 real 0m11.369s 00:42:55.239 user 0m26.684s 00:42:55.239 sys 0m0.931s 00:42:55.239 03:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:55.239 03:04:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:55.239 ************************************ 00:42:55.239 END TEST fio_dif_1_multi_subsystems 00:42:55.239 ************************************ 00:42:55.239 03:04:25 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:42:55.239 03:04:25 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:55.239 03:04:25 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:55.239 03:04:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:55.239 ************************************ 00:42:55.239 START TEST fio_dif_rand_params 00:42:55.239 ************************************ 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:42:55.239 03:04:25 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:55.239 bdev_null0 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:55.239 [2024-12-16 03:04:25.853973] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:55.239 03:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:42:55.240 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:55.240 03:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:55.240 03:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:55.240 03:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:55.240 { 00:42:55.240 "params": { 00:42:55.240 "name": "Nvme$subsystem", 00:42:55.240 "trtype": "$TEST_TRANSPORT", 00:42:55.240 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:42:55.240 "adrfam": "ipv4", 00:42:55.240 "trsvcid": "$NVMF_PORT", 00:42:55.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:55.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:55.240 "hdgst": ${hdgst:-false}, 00:42:55.240 "ddgst": ${ddgst:-false} 00:42:55.240 }, 00:42:55.240 "method": "bdev_nvme_attach_controller" 00:42:55.240 } 00:42:55.240 EOF 00:42:55.240 )") 00:42:55.240 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:55.240 03:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:55.240 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:55.240 03:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:55.240 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:55.240 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:55.240 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:42:55.240 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:55.240 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:55.240 03:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:55.240 03:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:55.240 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:55.240 03:04:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:55.240 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:42:55.240 03:04:25 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:55.240 03:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:42:55.240 03:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:42:55.240 03:04:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:55.240 "params": { 00:42:55.240 "name": "Nvme0", 00:42:55.240 "trtype": "tcp", 00:42:55.240 "traddr": "10.0.0.2", 00:42:55.240 "adrfam": "ipv4", 00:42:55.240 "trsvcid": "4420", 00:42:55.240 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:55.240 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:55.240 "hdgst": false, 00:42:55.240 "ddgst": false 00:42:55.240 }, 00:42:55.240 "method": "bdev_nvme_attach_controller" 00:42:55.240 }' 00:42:55.524 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:55.524 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:55.524 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:55.524 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:55.524 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:55.524 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:55.524 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:55.524 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:55.524 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:55.524 03:04:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:55.789 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:42:55.789 ... 00:42:55.789 fio-3.35 00:42:55.789 Starting 3 threads 00:43:02.356 00:43:02.356 filename0: (groupid=0, jobs=1): err= 0: pid=1306759: Mon Dec 16 03:04:31 2024 00:43:02.356 read: IOPS=331, BW=41.4MiB/s (43.4MB/s)(209MiB/5044msec) 00:43:02.356 slat (nsec): min=6293, max=53867, avg=16494.34, stdev=8526.84 00:43:02.356 clat (usec): min=5075, max=51316, avg=9011.60, stdev=4852.58 00:43:02.356 lat (usec): min=5086, max=51328, avg=9028.09, stdev=4852.32 00:43:02.356 clat percentiles (usec): 00:43:02.356 | 1.00th=[ 5669], 5.00th=[ 6456], 10.00th=[ 7046], 20.00th=[ 7570], 00:43:02.356 | 30.00th=[ 7963], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8717], 00:43:02.356 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[ 9896], 95.00th=[10552], 00:43:02.356 | 99.00th=[48497], 99.50th=[49546], 99.90th=[50594], 99.95th=[51119], 00:43:02.356 | 99.99th=[51119] 00:43:02.356 bw ( KiB/s): min=31488, max=48384, per=35.73%, avg=42726.40, stdev=4989.56, samples=10 00:43:02.356 iops : min= 246, max= 378, avg=333.80, stdev=38.98, samples=10 00:43:02.356 lat (msec) : 10=91.14%, 20=7.48%, 50=1.08%, 100=0.30% 00:43:02.356 cpu : usr=95.58%, sys=4.10%, ctx=20, majf=0, minf=83 00:43:02.356 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:02.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:02.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:02.356 issued rwts: total=1671,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:02.356 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:02.356 filename0: (groupid=0, jobs=1): err= 0: pid=1306760: Mon Dec 16 03:04:31 2024 00:43:02.356 read: IOPS=309, BW=38.7MiB/s (40.6MB/s)(196MiB/5046msec) 00:43:02.356 slat (nsec): min=6341, max=41983, avg=13219.39, stdev=5090.91 00:43:02.356 
clat (usec): min=3212, max=51626, avg=9656.39, stdev=5297.31 00:43:02.356 lat (usec): min=3222, max=51638, avg=9669.61, stdev=5297.93 00:43:02.356 clat percentiles (usec): 00:43:02.356 | 1.00th=[ 3556], 5.00th=[ 3949], 10.00th=[ 6783], 20.00th=[ 8160], 00:43:02.356 | 30.00th=[ 8586], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9634], 00:43:02.356 | 70.00th=[10028], 80.00th=[10421], 90.00th=[11207], 95.00th=[11731], 00:43:02.356 | 99.00th=[48497], 99.50th=[50070], 99.90th=[51119], 99.95th=[51643], 00:43:02.356 | 99.99th=[51643] 00:43:02.356 bw ( KiB/s): min=32256, max=53760, per=33.42%, avg=39961.60, stdev=5895.35, samples=10 00:43:02.356 iops : min= 252, max= 420, avg=312.20, stdev=46.06, samples=10 00:43:02.356 lat (msec) : 4=5.05%, 10=65.03%, 20=28.39%, 50=0.96%, 100=0.58% 00:43:02.356 cpu : usr=95.98%, sys=3.71%, ctx=8, majf=0, minf=63 00:43:02.356 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:02.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:02.356 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:02.356 issued rwts: total=1564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:02.356 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:02.356 filename0: (groupid=0, jobs=1): err= 0: pid=1306761: Mon Dec 16 03:04:31 2024 00:43:02.356 read: IOPS=293, BW=36.7MiB/s (38.4MB/s)(185MiB/5047msec) 00:43:02.356 slat (nsec): min=6335, max=30951, avg=13357.65, stdev=4734.86 00:43:02.356 clat (usec): min=4978, max=89847, avg=10183.93, stdev=4764.46 00:43:02.357 lat (usec): min=4985, max=89860, avg=10197.28, stdev=4764.32 00:43:02.357 clat percentiles (usec): 00:43:02.357 | 1.00th=[ 5800], 5.00th=[ 6521], 10.00th=[ 7242], 20.00th=[ 8586], 00:43:02.357 | 30.00th=[ 8979], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10290], 00:43:02.357 | 70.00th=[10683], 80.00th=[11207], 90.00th=[11731], 95.00th=[12256], 00:43:02.357 | 99.00th=[47973], 99.50th=[49546], 99.90th=[51119], 
99.95th=[89654], 00:43:02.357 | 99.99th=[89654] 00:43:02.357 bw ( KiB/s): min=23296, max=43008, per=31.64%, avg=37836.80, stdev=5574.56, samples=10 00:43:02.357 iops : min= 182, max= 336, avg=295.60, stdev=43.55, samples=10 00:43:02.357 lat (msec) : 10=53.11%, 20=45.81%, 50=0.68%, 100=0.41% 00:43:02.357 cpu : usr=96.31%, sys=3.39%, ctx=8, majf=0, minf=26 00:43:02.357 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:02.357 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:02.357 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:02.357 issued rwts: total=1480,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:02.357 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:02.357 00:43:02.357 Run status group 0 (all jobs): 00:43:02.357 READ: bw=117MiB/s (122MB/s), 36.7MiB/s-41.4MiB/s (38.4MB/s-43.4MB/s), io=589MiB (618MB), run=5044-5047msec 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:02.357 bdev_null0 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:02.357 
03:04:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:02.357 [2024-12-16 03:04:31.983109] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:02.357 bdev_null1 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:02.357 
03:04:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:02.357 03:04:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:43:02.357 bdev_null2 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:02.357 03:04:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:02.357 { 00:43:02.357 "params": { 00:43:02.357 "name": "Nvme$subsystem", 00:43:02.357 "trtype": "$TEST_TRANSPORT", 00:43:02.357 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:02.357 "adrfam": "ipv4", 00:43:02.357 "trsvcid": "$NVMF_PORT", 00:43:02.357 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:02.357 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:02.357 "hdgst": ${hdgst:-false}, 00:43:02.357 "ddgst": ${ddgst:-false} 00:43:02.357 }, 00:43:02.357 "method": "bdev_nvme_attach_controller" 00:43:02.357 } 00:43:02.357 EOF 00:43:02.358 )") 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # shift 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:02.358 { 00:43:02.358 "params": { 00:43:02.358 "name": "Nvme$subsystem", 00:43:02.358 "trtype": "$TEST_TRANSPORT", 00:43:02.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:02.358 "adrfam": "ipv4", 00:43:02.358 "trsvcid": "$NVMF_PORT", 00:43:02.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:02.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:02.358 "hdgst": ${hdgst:-false}, 00:43:02.358 "ddgst": ${ddgst:-false} 00:43:02.358 }, 00:43:02.358 "method": "bdev_nvme_attach_controller" 00:43:02.358 } 00:43:02.358 EOF 00:43:02.358 )") 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@582 -- # cat 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:02.358 { 00:43:02.358 "params": { 00:43:02.358 "name": "Nvme$subsystem", 00:43:02.358 "trtype": "$TEST_TRANSPORT", 00:43:02.358 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:02.358 "adrfam": "ipv4", 00:43:02.358 "trsvcid": "$NVMF_PORT", 00:43:02.358 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:02.358 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:02.358 "hdgst": ${hdgst:-false}, 00:43:02.358 "ddgst": ${ddgst:-false} 00:43:02.358 }, 00:43:02.358 "method": "bdev_nvme_attach_controller" 00:43:02.358 } 00:43:02.358 EOF 00:43:02.358 )") 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:02.358 "params": { 00:43:02.358 "name": "Nvme0", 00:43:02.358 "trtype": "tcp", 00:43:02.358 "traddr": "10.0.0.2", 00:43:02.358 "adrfam": "ipv4", 00:43:02.358 "trsvcid": "4420", 00:43:02.358 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:02.358 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:02.358 "hdgst": false, 00:43:02.358 "ddgst": false 00:43:02.358 }, 00:43:02.358 "method": "bdev_nvme_attach_controller" 00:43:02.358 },{ 00:43:02.358 "params": { 00:43:02.358 "name": "Nvme1", 00:43:02.358 "trtype": "tcp", 00:43:02.358 "traddr": "10.0.0.2", 00:43:02.358 "adrfam": "ipv4", 00:43:02.358 "trsvcid": "4420", 00:43:02.358 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:02.358 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:02.358 "hdgst": false, 00:43:02.358 "ddgst": false 00:43:02.358 }, 00:43:02.358 "method": "bdev_nvme_attach_controller" 00:43:02.358 },{ 00:43:02.358 "params": { 00:43:02.358 "name": "Nvme2", 00:43:02.358 "trtype": "tcp", 00:43:02.358 "traddr": "10.0.0.2", 00:43:02.358 "adrfam": "ipv4", 00:43:02.358 "trsvcid": "4420", 00:43:02.358 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:43:02.358 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:43:02.358 "hdgst": false, 00:43:02.358 "ddgst": false 00:43:02.358 }, 00:43:02.358 "method": "bdev_nvme_attach_controller" 00:43:02.358 }' 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:02.358 03:04:32 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:02.358 03:04:32 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:02.358 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:02.358 ... 00:43:02.358 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:02.358 ... 00:43:02.358 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:02.358 ... 
00:43:02.358 fio-3.35 00:43:02.358 Starting 24 threads 00:43:14.569 00:43:14.569 filename0: (groupid=0, jobs=1): err= 0: pid=1307815: Mon Dec 16 03:04:43 2024 00:43:14.569 read: IOPS=540, BW=2161KiB/s (2213kB/s)(21.1MiB/10009msec) 00:43:14.569 slat (nsec): min=7666, max=55814, avg=18056.88, stdev=6647.26 00:43:14.569 clat (usec): min=1169, max=33269, avg=29465.07, stdev=5100.82 00:43:14.569 lat (usec): min=1185, max=33293, avg=29483.13, stdev=5101.70 00:43:14.569 clat percentiles (usec): 00:43:14.569 | 1.00th=[ 1549], 5.00th=[29754], 10.00th=[30016], 20.00th=[30278], 00:43:14.569 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:43:14.569 | 70.00th=[30540], 80.00th=[30802], 90.00th=[31065], 95.00th=[31065], 00:43:14.569 | 99.00th=[31589], 99.50th=[32113], 99.90th=[33162], 99.95th=[33162], 00:43:14.569 | 99.99th=[33162] 00:43:14.569 bw ( KiB/s): min= 2048, max= 3456, per=4.35%, avg=2156.80, stdev=311.53, samples=20 00:43:14.569 iops : min= 512, max= 864, avg=539.20, stdev=77.88, samples=20 00:43:14.569 lat (msec) : 2=2.37%, 4=0.59%, 10=0.13%, 20=1.05%, 50=95.86% 00:43:14.569 cpu : usr=98.10%, sys=1.52%, ctx=20, majf=0, minf=9 00:43:14.569 IO depths : 1=6.1%, 2=12.2%, 4=24.7%, 8=50.5%, 16=6.5%, 32=0.0%, >=64=0.0% 00:43:14.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.569 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.569 issued rwts: total=5408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.569 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:14.569 filename0: (groupid=0, jobs=1): err= 0: pid=1307816: Mon Dec 16 03:04:43 2024 00:43:14.569 read: IOPS=513, BW=2055KiB/s (2104kB/s)(20.3MiB/10124msec) 00:43:14.569 slat (nsec): min=4295, max=49699, avg=14727.48, stdev=6798.48 00:43:14.569 clat (msec): min=29, max=173, avg=31.03, stdev= 8.04 00:43:14.569 lat (msec): min=29, max=173, avg=31.04, stdev= 8.04 00:43:14.569 clat percentiles (msec): 00:43:14.569 | 
1.00th=[ 31], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:43:14.569 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:43:14.569 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 32], 95.00th=[ 32], 00:43:14.569 | 99.00th=[ 32], 99.50th=[ 57], 99.90th=[ 174], 99.95th=[ 174], 00:43:14.569 | 99.99th=[ 174] 00:43:14.569 bw ( KiB/s): min= 1920, max= 2176, per=4.18%, avg=2073.60, stdev=78.80, samples=20 00:43:14.569 iops : min= 480, max= 544, avg=518.40, stdev=19.70, samples=20 00:43:14.569 lat (msec) : 50=99.38%, 100=0.31%, 250=0.31% 00:43:14.569 cpu : usr=98.69%, sys=0.93%, ctx=13, majf=0, minf=9 00:43:14.569 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:14.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.569 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.569 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.569 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:14.569 filename0: (groupid=0, jobs=1): err= 0: pid=1307817: Mon Dec 16 03:04:43 2024 00:43:14.569 read: IOPS=513, BW=2055KiB/s (2104kB/s)(20.3MiB/10122msec) 00:43:14.569 slat (nsec): min=7668, max=84488, avg=29949.10, stdev=15888.33 00:43:14.569 clat (msec): min=26, max=173, avg=30.88, stdev= 8.06 00:43:14.569 lat (msec): min=26, max=173, avg=30.91, stdev= 8.06 00:43:14.569 clat percentiles (msec): 00:43:14.569 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 31], 00:43:14.569 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:43:14.569 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 32], 00:43:14.569 | 99.00th=[ 32], 99.50th=[ 56], 99.90th=[ 174], 99.95th=[ 174], 00:43:14.569 | 99.99th=[ 174] 00:43:14.569 bw ( KiB/s): min= 1920, max= 2176, per=4.18%, avg=2073.60, stdev=78.80, samples=20 00:43:14.569 iops : min= 480, max= 544, avg=518.40, stdev=19.70, samples=20 00:43:14.569 lat (msec) : 50=99.38%, 100=0.31%, 250=0.31% 00:43:14.569 cpu 
: usr=98.49%, sys=1.14%, ctx=13, majf=0, minf=9 00:43:14.569 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:14.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.569 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.569 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.569 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:14.569 filename0: (groupid=0, jobs=1): err= 0: pid=1307818: Mon Dec 16 03:04:43 2024 00:43:14.569 read: IOPS=514, BW=2058KiB/s (2107kB/s)(20.4MiB/10136msec) 00:43:14.569 slat (nsec): min=4683, max=90081, avg=28395.20, stdev=16121.93 00:43:14.569 clat (msec): min=17, max=171, avg=30.84, stdev= 7.94 00:43:14.569 lat (msec): min=17, max=171, avg=30.87, stdev= 7.94 00:43:14.569 clat percentiles (msec): 00:43:14.569 | 1.00th=[ 30], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 31], 00:43:14.569 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:43:14.569 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 32], 00:43:14.569 | 99.00th=[ 35], 99.50th=[ 45], 99.90th=[ 171], 99.95th=[ 171], 00:43:14.569 | 99.99th=[ 171] 00:43:14.569 bw ( KiB/s): min= 2048, max= 2176, per=4.19%, avg=2079.20, stdev=52.49, samples=20 00:43:14.569 iops : min= 512, max= 544, avg=519.80, stdev=13.12, samples=20 00:43:14.569 lat (msec) : 20=0.31%, 50=99.35%, 100=0.04%, 250=0.31% 00:43:14.569 cpu : usr=98.51%, sys=1.11%, ctx=14, majf=0, minf=9 00:43:14.569 IO depths : 1=5.9%, 2=12.1%, 4=24.7%, 8=50.7%, 16=6.5%, 32=0.0%, >=64=0.0% 00:43:14.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.569 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.569 issued rwts: total=5214,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.569 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:14.569 filename0: (groupid=0, jobs=1): err= 0: pid=1307819: Mon Dec 16 03:04:43 2024 
00:43:14.569 read: IOPS=515, BW=2062KiB/s (2112kB/s)(20.4MiB/10147msec) 00:43:14.569 slat (nsec): min=7581, max=60618, avg=27928.93, stdev=8914.11 00:43:14.569 clat (msec): min=15, max=171, avg=30.77, stdev= 7.82 00:43:14.569 lat (msec): min=15, max=171, avg=30.80, stdev= 7.82 00:43:14.569 clat percentiles (msec): 00:43:14.569 | 1.00th=[ 30], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:43:14.569 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:43:14.569 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 32], 00:43:14.569 | 99.00th=[ 32], 99.50th=[ 34], 99.90th=[ 171], 99.95th=[ 171], 00:43:14.569 | 99.99th=[ 171] 00:43:14.569 bw ( KiB/s): min= 2048, max= 2176, per=4.21%, avg=2086.40, stdev=60.18, samples=20 00:43:14.569 iops : min= 512, max= 544, avg=521.60, stdev=15.05, samples=20 00:43:14.569 lat (msec) : 20=0.31%, 50=99.39%, 250=0.31% 00:43:14.569 cpu : usr=98.56%, sys=1.05%, ctx=14, majf=0, minf=9 00:43:14.569 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:14.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.569 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.569 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.569 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:14.569 filename0: (groupid=0, jobs=1): err= 0: pid=1307820: Mon Dec 16 03:04:43 2024 00:43:14.569 read: IOPS=530, BW=2123KiB/s (2174kB/s)(21.0MiB/10123msec) 00:43:14.569 slat (nsec): min=4746, max=78620, avg=20412.39, stdev=12176.52 00:43:14.569 clat (msec): min=10, max=173, avg=29.88, stdev= 7.59 00:43:14.569 lat (msec): min=10, max=173, avg=29.90, stdev= 7.59 00:43:14.569 clat percentiles (msec): 00:43:14.569 | 1.00th=[ 18], 5.00th=[ 21], 10.00th=[ 25], 20.00th=[ 31], 00:43:14.569 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:43:14.569 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 34], 00:43:14.569 | 99.00th=[ 
44], 99.50th=[ 56], 99.90th=[ 171], 99.95th=[ 174], 00:43:14.569 | 99.99th=[ 174] 00:43:14.569 bw ( KiB/s): min= 1923, max= 2496, per=4.32%, avg=2143.35, stdev=127.88, samples=20 00:43:14.569 iops : min= 480, max= 624, avg=535.80, stdev=32.04, samples=20 00:43:14.569 lat (msec) : 20=3.35%, 50=96.06%, 100=0.41%, 250=0.19% 00:43:14.569 cpu : usr=98.71%, sys=0.92%, ctx=14, majf=0, minf=9 00:43:14.569 IO depths : 1=2.9%, 2=6.0%, 4=13.5%, 8=66.0%, 16=11.5%, 32=0.0%, >=64=0.0% 00:43:14.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.569 complete : 0=0.0%, 4=91.4%, 8=4.8%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.569 issued rwts: total=5374,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.570 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:14.570 filename0: (groupid=0, jobs=1): err= 0: pid=1307821: Mon Dec 16 03:04:43 2024 00:43:14.570 read: IOPS=523, BW=2094KiB/s (2144kB/s)(20.6MiB/10055msec) 00:43:14.570 slat (usec): min=7, max=106, avg=23.29, stdev=16.26 00:43:14.570 clat (usec): min=12406, max=83823, avg=30350.02, stdev=3488.37 00:43:14.570 lat (usec): min=12419, max=83857, avg=30373.31, stdev=3489.06 00:43:14.570 clat percentiles (usec): 00:43:14.570 | 1.00th=[15533], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:43:14.570 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:43:14.570 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:43:14.570 | 99.00th=[31851], 99.50th=[32900], 99.90th=[83362], 99.95th=[83362], 00:43:14.570 | 99.99th=[83362] 00:43:14.570 bw ( KiB/s): min= 2048, max= 2304, per=4.23%, avg=2099.20, stdev=76.58, samples=20 00:43:14.570 iops : min= 512, max= 576, avg=524.80, stdev=19.14, samples=20 00:43:14.570 lat (msec) : 20=1.25%, 50=98.44%, 100=0.30% 00:43:14.570 cpu : usr=98.61%, sys=1.01%, ctx=13, majf=0, minf=9 00:43:14.570 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:14.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.570 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.570 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.570 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:14.570 filename0: (groupid=0, jobs=1): err= 0: pid=1307822: Mon Dec 16 03:04:43 2024 00:43:14.570 read: IOPS=517, BW=2072KiB/s (2122kB/s)(20.6MiB/10163msec) 00:43:14.570 slat (nsec): min=9956, max=64555, avg=26127.38, stdev=8607.84 00:43:14.570 clat (msec): min=10, max=171, avg=30.68, stdev= 7.94 00:43:14.570 lat (msec): min=10, max=171, avg=30.70, stdev= 7.94 00:43:14.570 clat percentiles (msec): 00:43:14.570 | 1.00th=[ 20], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:43:14.570 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:43:14.570 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 32], 00:43:14.570 | 99.00th=[ 32], 99.50th=[ 34], 99.90th=[ 171], 99.95th=[ 171], 00:43:14.570 | 99.99th=[ 171] 00:43:14.570 bw ( KiB/s): min= 2048, max= 2304, per=4.23%, avg=2099.20, stdev=76.58, samples=20 00:43:14.570 iops : min= 512, max= 576, avg=524.80, stdev=19.14, samples=20 00:43:14.570 lat (msec) : 20=1.18%, 50=98.52%, 250=0.30% 00:43:14.570 cpu : usr=98.37%, sys=1.25%, ctx=16, majf=0, minf=9 00:43:14.570 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:14.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.570 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.570 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.570 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:14.570 filename1: (groupid=0, jobs=1): err= 0: pid=1307823: Mon Dec 16 03:04:43 2024 00:43:14.570 read: IOPS=516, BW=2065KiB/s (2114kB/s)(20.4MiB/10139msec) 00:43:14.570 slat (nsec): min=7581, max=68187, avg=26225.44, stdev=10035.83 00:43:14.570 clat (msec): min=19, max=174, avg=30.73, 
stdev= 7.92 00:43:14.570 lat (msec): min=19, max=174, avg=30.76, stdev= 7.92 00:43:14.570 clat percentiles (msec): 00:43:14.570 | 1.00th=[ 24], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 31], 00:43:14.570 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:43:14.570 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 32], 00:43:14.570 | 99.00th=[ 34], 99.50th=[ 41], 99.90th=[ 171], 99.95th=[ 171], 00:43:14.570 | 99.99th=[ 176] 00:43:14.570 bw ( KiB/s): min= 2048, max= 2192, per=4.21%, avg=2087.20, stdev=61.53, samples=20 00:43:14.570 iops : min= 512, max= 548, avg=521.80, stdev=15.38, samples=20 00:43:14.570 lat (msec) : 20=0.08%, 50=99.62%, 250=0.31% 00:43:14.570 cpu : usr=98.53%, sys=1.10%, ctx=16, majf=0, minf=9 00:43:14.570 IO depths : 1=5.8%, 2=11.8%, 4=24.3%, 8=51.3%, 16=6.8%, 32=0.0%, >=64=0.0% 00:43:14.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.570 complete : 0=0.0%, 4=94.0%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.570 issued rwts: total=5234,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.570 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:14.570 filename1: (groupid=0, jobs=1): err= 0: pid=1307824: Mon Dec 16 03:04:43 2024 00:43:14.570 read: IOPS=515, BW=2063KiB/s (2112kB/s)(20.4MiB/10146msec) 00:43:14.570 slat (nsec): min=6270, max=68718, avg=27577.19, stdev=9022.00 00:43:14.570 clat (msec): min=19, max=172, avg=30.76, stdev= 7.84 00:43:14.570 lat (msec): min=19, max=172, avg=30.79, stdev= 7.84 00:43:14.570 clat percentiles (msec): 00:43:14.570 | 1.00th=[ 30], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:43:14.570 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:43:14.570 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 32], 00:43:14.570 | 99.00th=[ 32], 99.50th=[ 33], 99.90th=[ 171], 99.95th=[ 171], 00:43:14.570 | 99.99th=[ 174] 00:43:14.570 bw ( KiB/s): min= 2048, max= 2176, per=4.21%, avg=2086.40, stdev=60.18, samples=20 00:43:14.570 iops : 
min= 512, max= 544, avg=521.60, stdev=15.05, samples=20 00:43:14.570 lat (msec) : 20=0.31%, 50=99.39%, 250=0.31% 00:43:14.570 cpu : usr=98.53%, sys=1.08%, ctx=14, majf=0, minf=9 00:43:14.570 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:14.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.570 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.570 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.570 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:14.570 filename1: (groupid=0, jobs=1): err= 0: pid=1307825: Mon Dec 16 03:04:43 2024 00:43:14.570 read: IOPS=513, BW=2055KiB/s (2104kB/s)(20.3MiB/10123msec) 00:43:14.570 slat (nsec): min=4772, max=48115, avg=23675.51, stdev=7410.07 00:43:14.570 clat (msec): min=24, max=173, avg=30.94, stdev= 8.06 00:43:14.570 lat (msec): min=24, max=173, avg=30.97, stdev= 8.06 00:43:14.570 clat percentiles (msec): 00:43:14.570 | 1.00th=[ 30], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:43:14.570 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:43:14.570 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 32], 95.00th=[ 32], 00:43:14.570 | 99.00th=[ 33], 99.50th=[ 56], 99.90th=[ 174], 99.95th=[ 174], 00:43:14.570 | 99.99th=[ 174] 00:43:14.570 bw ( KiB/s): min= 1920, max= 2176, per=4.18%, avg=2073.75, stdev=78.49, samples=20 00:43:14.570 iops : min= 480, max= 544, avg=518.40, stdev=19.70, samples=20 00:43:14.570 lat (msec) : 50=99.38%, 100=0.31%, 250=0.31% 00:43:14.570 cpu : usr=98.48%, sys=1.14%, ctx=13, majf=0, minf=9 00:43:14.570 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:43:14.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.570 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.570 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.570 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:43:14.570 filename1: (groupid=0, jobs=1): err= 0: pid=1307827: Mon Dec 16 03:04:43 2024 00:43:14.570 read: IOPS=515, BW=2063KiB/s (2112kB/s)(20.4MiB/10146msec) 00:43:14.570 slat (nsec): min=6338, max=54944, avg=27335.18, stdev=8693.05 00:43:14.570 clat (msec): min=19, max=171, avg=30.77, stdev= 7.83 00:43:14.570 lat (msec): min=19, max=171, avg=30.80, stdev= 7.83 00:43:14.570 clat percentiles (msec): 00:43:14.570 | 1.00th=[ 30], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:43:14.570 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:43:14.570 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 32], 00:43:14.570 | 99.00th=[ 32], 99.50th=[ 33], 99.90th=[ 171], 99.95th=[ 171], 00:43:14.570 | 99.99th=[ 171] 00:43:14.570 bw ( KiB/s): min= 2048, max= 2176, per=4.21%, avg=2086.60, stdev=60.05, samples=20 00:43:14.570 iops : min= 512, max= 544, avg=521.65, stdev=15.01, samples=20 00:43:14.570 lat (msec) : 20=0.31%, 50=99.39%, 250=0.31% 00:43:14.570 cpu : usr=98.49%, sys=1.14%, ctx=14, majf=0, minf=9 00:43:14.570 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:14.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.570 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.570 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.570 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:14.570 filename1: (groupid=0, jobs=1): err= 0: pid=1307828: Mon Dec 16 03:04:43 2024 00:43:14.570 read: IOPS=513, BW=2054KiB/s (2104kB/s)(20.3MiB/10125msec) 00:43:14.570 slat (nsec): min=5388, max=45168, avg=22660.08, stdev=6700.15 00:43:14.570 clat (msec): min=29, max=176, avg=30.93, stdev= 8.06 00:43:14.570 lat (msec): min=29, max=176, avg=30.96, stdev= 8.06 00:43:14.570 clat percentiles (msec): 00:43:14.570 | 1.00th=[ 30], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:43:14.570 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 
31], 60.00th=[ 31], 00:43:14.570 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 32], 95.00th=[ 32], 00:43:14.570 | 99.00th=[ 32], 99.50th=[ 55], 99.90th=[ 174], 99.95th=[ 174], 00:43:14.570 | 99.99th=[ 178] 00:43:14.570 bw ( KiB/s): min= 1920, max= 2176, per=4.18%, avg=2073.60, stdev=78.80, samples=20 00:43:14.570 iops : min= 480, max= 544, avg=518.40, stdev=19.70, samples=20 00:43:14.570 lat (msec) : 50=99.38%, 100=0.31%, 250=0.31% 00:43:14.570 cpu : usr=98.40%, sys=1.22%, ctx=15, majf=0, minf=9 00:43:14.570 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:14.570 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.570 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.570 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.570 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:14.570 filename1: (groupid=0, jobs=1): err= 0: pid=1307829: Mon Dec 16 03:04:43 2024 00:43:14.570 read: IOPS=517, BW=2072KiB/s (2122kB/s)(20.6MiB/10163msec) 00:43:14.570 slat (nsec): min=8006, max=57175, avg=26072.32, stdev=9051.31 00:43:14.570 clat (msec): min=12, max=171, avg=30.68, stdev= 7.93 00:43:14.570 lat (msec): min=12, max=171, avg=30.71, stdev= 7.93 00:43:14.570 clat percentiles (msec): 00:43:14.570 | 1.00th=[ 20], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:43:14.571 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:43:14.571 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 32], 00:43:14.571 | 99.00th=[ 32], 99.50th=[ 34], 99.90th=[ 171], 99.95th=[ 171], 00:43:14.571 | 99.99th=[ 171] 00:43:14.571 bw ( KiB/s): min= 2048, max= 2304, per=4.23%, avg=2099.20, stdev=76.58, samples=20 00:43:14.571 iops : min= 512, max= 576, avg=524.80, stdev=19.14, samples=20 00:43:14.571 lat (msec) : 20=1.18%, 50=98.52%, 250=0.30% 00:43:14.571 cpu : usr=98.54%, sys=1.08%, ctx=18, majf=0, minf=9 00:43:14.571 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 
16=6.3%, 32=0.0%, >=64=0.0% 00:43:14.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.571 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.571 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.571 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:14.571 filename1: (groupid=0, jobs=1): err= 0: pid=1307830: Mon Dec 16 03:04:43 2024 00:43:14.571 read: IOPS=538, BW=2154KiB/s (2206kB/s)(21.3MiB/10123msec) 00:43:14.571 slat (nsec): min=5055, max=74700, avg=16310.23, stdev=10328.38 00:43:14.571 clat (msec): min=16, max=173, avg=29.62, stdev= 9.01 00:43:14.571 lat (msec): min=16, max=173, avg=29.64, stdev= 9.02 00:43:14.571 clat percentiles (msec): 00:43:14.571 | 1.00th=[ 19], 5.00th=[ 21], 10.00th=[ 23], 20.00th=[ 26], 00:43:14.571 | 30.00th=[ 30], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:43:14.571 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 33], 95.00th=[ 37], 00:43:14.571 | 99.00th=[ 42], 99.50th=[ 56], 99.90th=[ 174], 99.95th=[ 174], 00:43:14.571 | 99.99th=[ 174] 00:43:14.571 bw ( KiB/s): min= 1920, max= 2464, per=4.38%, avg=2174.60, stdev=134.74, samples=20 00:43:14.571 iops : min= 480, max= 616, avg=543.65, stdev=33.68, samples=20 00:43:14.571 lat (msec) : 20=4.77%, 50=94.64%, 100=0.29%, 250=0.29% 00:43:14.571 cpu : usr=98.43%, sys=1.19%, ctx=13, majf=0, minf=9 00:43:14.571 IO depths : 1=1.1%, 2=2.3%, 4=7.0%, 8=75.5%, 16=14.0%, 32=0.0%, >=64=0.0% 00:43:14.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.571 complete : 0=0.0%, 4=89.8%, 8=7.3%, 16=3.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.571 issued rwts: total=5452,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.571 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:14.571 filename1: (groupid=0, jobs=1): err= 0: pid=1307831: Mon Dec 16 03:04:43 2024 00:43:14.571 read: IOPS=523, BW=2094KiB/s (2144kB/s)(20.6MiB/10055msec) 00:43:14.571 slat (nsec): min=7553, 
max=78629, avg=22485.71, stdev=17428.60 00:43:14.571 clat (usec): min=12419, max=83633, avg=30328.68, stdev=3495.15 00:43:14.571 lat (usec): min=12441, max=83665, avg=30351.17, stdev=3495.94 00:43:14.571 clat percentiles (usec): 00:43:14.571 | 1.00th=[15008], 5.00th=[29754], 10.00th=[30016], 20.00th=[30016], 00:43:14.571 | 30.00th=[30278], 40.00th=[30278], 50.00th=[30278], 60.00th=[30540], 00:43:14.571 | 70.00th=[30540], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:43:14.571 | 99.00th=[31851], 99.50th=[33162], 99.90th=[83362], 99.95th=[83362], 00:43:14.571 | 99.99th=[83362] 00:43:14.571 bw ( KiB/s): min= 2048, max= 2304, per=4.23%, avg=2099.20, stdev=76.58, samples=20 00:43:14.571 iops : min= 512, max= 576, avg=524.80, stdev=19.14, samples=20 00:43:14.571 lat (msec) : 20=1.35%, 50=98.35%, 100=0.30% 00:43:14.571 cpu : usr=98.46%, sys=1.16%, ctx=13, majf=0, minf=9 00:43:14.571 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:14.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.571 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.571 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.571 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:14.571 filename2: (groupid=0, jobs=1): err= 0: pid=1307832: Mon Dec 16 03:04:43 2024 00:43:14.571 read: IOPS=517, BW=2071KiB/s (2121kB/s)(20.6MiB/10165msec) 00:43:14.571 slat (nsec): min=9315, max=61620, avg=27511.79, stdev=8717.49 00:43:14.571 clat (msec): min=12, max=173, avg=30.65, stdev= 7.95 00:43:14.571 lat (msec): min=12, max=173, avg=30.68, stdev= 7.95 00:43:14.571 clat percentiles (msec): 00:43:14.571 | 1.00th=[ 20], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:43:14.571 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:43:14.571 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 31], 95.00th=[ 32], 00:43:14.571 | 99.00th=[ 32], 99.50th=[ 34], 99.90th=[ 171], 99.95th=[ 171], 
00:43:14.571 | 99.99th=[ 174] 00:43:14.571 bw ( KiB/s): min= 2048, max= 2304, per=4.23%, avg=2099.20, stdev=76.58, samples=20 00:43:14.571 iops : min= 512, max= 576, avg=524.80, stdev=19.14, samples=20 00:43:14.571 lat (msec) : 20=1.22%, 50=98.48%, 250=0.30% 00:43:14.571 cpu : usr=98.72%, sys=0.90%, ctx=13, majf=0, minf=9 00:43:14.571 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:14.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.571 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.571 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.571 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:14.571 filename2: (groupid=0, jobs=1): err= 0: pid=1307833: Mon Dec 16 03:04:43 2024 00:43:14.571 read: IOPS=513, BW=2054KiB/s (2104kB/s)(20.3MiB/10125msec) 00:43:14.571 slat (nsec): min=5210, max=43548, avg=21859.00, stdev=6067.62 00:43:14.571 clat (msec): min=29, max=176, avg=30.95, stdev= 8.06 00:43:14.571 lat (msec): min=29, max=176, avg=30.97, stdev= 8.06 00:43:14.571 clat percentiles (msec): 00:43:14.571 | 1.00th=[ 30], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:43:14.571 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:43:14.571 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 32], 95.00th=[ 32], 00:43:14.571 | 99.00th=[ 32], 99.50th=[ 55], 99.90th=[ 174], 99.95th=[ 174], 00:43:14.571 | 99.99th=[ 178] 00:43:14.571 bw ( KiB/s): min= 1920, max= 2176, per=4.18%, avg=2073.60, stdev=78.80, samples=20 00:43:14.571 iops : min= 480, max= 544, avg=518.40, stdev=19.70, samples=20 00:43:14.571 lat (msec) : 50=99.38%, 100=0.31%, 250=0.31% 00:43:14.571 cpu : usr=98.38%, sys=1.24%, ctx=14, majf=0, minf=9 00:43:14.571 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:14.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.571 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:43:14.571 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.571 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:14.571 filename2: (groupid=0, jobs=1): err= 0: pid=1307834: Mon Dec 16 03:04:43 2024 00:43:14.571 read: IOPS=517, BW=2071KiB/s (2121kB/s)(20.6MiB/10165msec) 00:43:14.571 slat (nsec): min=8887, max=58505, avg=22797.24, stdev=8937.25 00:43:14.571 clat (msec): min=10, max=173, avg=30.71, stdev= 7.94 00:43:14.571 lat (msec): min=10, max=173, avg=30.73, stdev= 7.94 00:43:14.571 clat percentiles (msec): 00:43:14.571 | 1.00th=[ 20], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:43:14.571 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:43:14.571 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 32], 95.00th=[ 32], 00:43:14.571 | 99.00th=[ 32], 99.50th=[ 34], 99.90th=[ 171], 99.95th=[ 171], 00:43:14.571 | 99.99th=[ 174] 00:43:14.571 bw ( KiB/s): min= 2048, max= 2304, per=4.23%, avg=2099.20, stdev=76.58, samples=20 00:43:14.571 iops : min= 512, max= 576, avg=524.80, stdev=19.14, samples=20 00:43:14.571 lat (msec) : 20=1.22%, 50=98.48%, 250=0.30% 00:43:14.571 cpu : usr=98.48%, sys=1.14%, ctx=16, majf=0, minf=9 00:43:14.571 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:14.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.571 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.571 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.571 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:14.571 filename2: (groupid=0, jobs=1): err= 0: pid=1307835: Mon Dec 16 03:04:43 2024 00:43:14.571 read: IOPS=515, BW=2062KiB/s (2112kB/s)(20.4MiB/10147msec) 00:43:14.571 slat (nsec): min=6809, max=57161, avg=26681.54, stdev=8989.05 00:43:14.571 clat (msec): min=18, max=171, avg=30.80, stdev= 8.04 00:43:14.571 lat (msec): min=18, max=171, avg=30.83, stdev= 8.04 00:43:14.571 clat percentiles 
(msec): 00:43:14.571 | 1.00th=[ 21], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 31], 00:43:14.571 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:43:14.571 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 32], 95.00th=[ 32], 00:43:14.571 | 99.00th=[ 41], 99.50th=[ 41], 99.90th=[ 171], 99.95th=[ 171], 00:43:14.571 | 99.99th=[ 171] 00:43:14.571 bw ( KiB/s): min= 2048, max= 2176, per=4.21%, avg=2086.40, stdev=56.96, samples=20 00:43:14.571 iops : min= 512, max= 544, avg=521.60, stdev=14.24, samples=20 00:43:14.571 lat (msec) : 20=0.61%, 50=99.08%, 250=0.31% 00:43:14.571 cpu : usr=98.51%, sys=1.10%, ctx=13, majf=0, minf=9 00:43:14.571 IO depths : 1=4.4%, 2=10.6%, 4=24.8%, 8=52.1%, 16=8.1%, 32=0.0%, >=64=0.0% 00:43:14.571 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.571 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.571 issued rwts: total=5232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.571 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:14.571 filename2: (groupid=0, jobs=1): err= 0: pid=1307836: Mon Dec 16 03:04:43 2024 00:43:14.571 read: IOPS=514, BW=2059KiB/s (2109kB/s)(20.4MiB/10132msec) 00:43:14.571 slat (nsec): min=5733, max=46994, avg=23089.13, stdev=7241.10 00:43:14.571 clat (msec): min=28, max=173, avg=30.88, stdev= 7.91 00:43:14.571 lat (msec): min=28, max=173, avg=30.90, stdev= 7.91 00:43:14.571 clat percentiles (msec): 00:43:14.571 | 1.00th=[ 30], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:43:14.571 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:43:14.571 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 32], 95.00th=[ 32], 00:43:14.571 | 99.00th=[ 32], 99.50th=[ 36], 99.90th=[ 174], 99.95th=[ 174], 00:43:14.571 | 99.99th=[ 174] 00:43:14.571 bw ( KiB/s): min= 1920, max= 2176, per=4.19%, avg=2080.00, stdev=70.42, samples=20 00:43:14.571 iops : min= 480, max= 544, avg=520.00, stdev=17.60, samples=20 00:43:14.571 lat (msec) : 50=99.69%, 250=0.31% 
00:43:14.571 cpu : usr=98.45%, sys=1.17%, ctx=14, majf=0, minf=9 00:43:14.572 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:14.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.572 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.572 issued rwts: total=5216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.572 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:14.572 filename2: (groupid=0, jobs=1): err= 0: pid=1307838: Mon Dec 16 03:04:43 2024 00:43:14.572 read: IOPS=517, BW=2071KiB/s (2121kB/s)(20.6MiB/10165msec) 00:43:14.572 slat (nsec): min=7667, max=53185, avg=14212.69, stdev=5388.52 00:43:14.572 clat (msec): min=8, max=173, avg=30.77, stdev= 7.95 00:43:14.572 lat (msec): min=8, max=173, avg=30.78, stdev= 7.95 00:43:14.572 clat percentiles (msec): 00:43:14.572 | 1.00th=[ 20], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:43:14.572 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:43:14.572 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 32], 95.00th=[ 32], 00:43:14.572 | 99.00th=[ 32], 99.50th=[ 33], 99.90th=[ 171], 99.95th=[ 171], 00:43:14.572 | 99.99th=[ 174] 00:43:14.572 bw ( KiB/s): min= 2048, max= 2304, per=4.23%, avg=2099.20, stdev=76.58, samples=20 00:43:14.572 iops : min= 512, max= 576, avg=524.80, stdev=19.14, samples=20 00:43:14.572 lat (msec) : 10=0.04%, 20=1.14%, 50=98.52%, 250=0.30% 00:43:14.572 cpu : usr=98.55%, sys=1.04%, ctx=24, majf=0, minf=9 00:43:14.572 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:14.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.572 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.572 issued rwts: total=5264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.572 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:14.572 filename2: (groupid=0, jobs=1): err= 0: pid=1307839: Mon Dec 16 03:04:43 
2024 00:43:14.572 read: IOPS=513, BW=2055KiB/s (2104kB/s)(20.3MiB/10121msec) 00:43:14.572 slat (nsec): min=4080, max=50024, avg=23108.31, stdev=7910.52 00:43:14.572 clat (msec): min=29, max=173, avg=30.95, stdev= 8.02 00:43:14.572 lat (msec): min=29, max=173, avg=30.97, stdev= 8.01 00:43:14.572 clat percentiles (msec): 00:43:14.572 | 1.00th=[ 30], 5.00th=[ 31], 10.00th=[ 31], 20.00th=[ 31], 00:43:14.572 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 31], 00:43:14.572 | 70.00th=[ 31], 80.00th=[ 31], 90.00th=[ 32], 95.00th=[ 32], 00:43:14.572 | 99.00th=[ 32], 99.50th=[ 55], 99.90th=[ 174], 99.95th=[ 174], 00:43:14.572 | 99.99th=[ 174] 00:43:14.572 bw ( KiB/s): min= 1920, max= 2176, per=4.18%, avg=2073.60, stdev=78.80, samples=20 00:43:14.572 iops : min= 480, max= 544, avg=518.40, stdev=19.70, samples=20 00:43:14.572 lat (msec) : 50=99.38%, 100=0.31%, 250=0.31% 00:43:14.572 cpu : usr=98.64%, sys=0.97%, ctx=12, majf=0, minf=9 00:43:14.572 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:14.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.572 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.572 issued rwts: total=5200,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.572 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:14.572 filename2: (groupid=0, jobs=1): err= 0: pid=1307840: Mon Dec 16 03:04:43 2024 00:43:14.572 read: IOPS=517, BW=2070KiB/s (2120kB/s)(20.2MiB/10017msec) 00:43:14.572 slat (nsec): min=7293, max=82703, avg=33321.28, stdev=19987.97 00:43:14.572 clat (usec): min=16203, max=84277, avg=30611.65, stdev=3807.98 00:43:14.572 lat (usec): min=16218, max=84303, avg=30644.97, stdev=3806.44 00:43:14.572 clat percentiles (usec): 00:43:14.572 | 1.00th=[29230], 5.00th=[29492], 10.00th=[29754], 20.00th=[30016], 00:43:14.572 | 30.00th=[30016], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:43:14.572 | 70.00th=[30540], 80.00th=[30540], 
90.00th=[30802], 95.00th=[31065], 00:43:14.572 | 99.00th=[32113], 99.50th=[71828], 99.90th=[84411], 99.95th=[84411], 00:43:14.572 | 99.99th=[84411] 00:43:14.572 bw ( KiB/s): min= 1795, max= 2176, per=4.17%, avg=2067.35, stdev=94.93, samples=20 00:43:14.572 iops : min= 448, max= 544, avg=516.80, stdev=23.85, samples=20 00:43:14.572 lat (msec) : 20=0.04%, 50=99.34%, 100=0.62% 00:43:14.572 cpu : usr=98.48%, sys=0.96%, ctx=62, majf=0, minf=9 00:43:14.572 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:14.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.572 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:14.572 issued rwts: total=5184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:14.572 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:14.572 00:43:14.572 Run status group 0 (all jobs): 00:43:14.572 READ: bw=48.4MiB/s (50.8MB/s), 2054KiB/s-2161KiB/s (2104kB/s-2213kB/s), io=492MiB (516MB), run=10009-10165msec 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 
-- # rpc_cmd bdev_null_delete bdev_null0 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.572 
03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:14.572 bdev_null0 00:43:14.572 03:04:43 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:14.572 [2024-12-16 03:04:43.802330] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:43:14.572 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 
512 --md-size 16 --dif-type 1 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:14.573 bdev_null1 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:14.573 { 00:43:14.573 "params": { 00:43:14.573 "name": "Nvme$subsystem", 00:43:14.573 "trtype": "$TEST_TRANSPORT", 00:43:14.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:14.573 "adrfam": "ipv4", 00:43:14.573 "trsvcid": "$NVMF_PORT", 00:43:14.573 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:14.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:14.573 "hdgst": ${hdgst:-false}, 00:43:14.573 "ddgst": ${ddgst:-false} 00:43:14.573 }, 00:43:14.573 "method": "bdev_nvme_attach_controller" 00:43:14.573 } 00:43:14.573 EOF 00:43:14.573 )") 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:14.573 
03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:14.573 { 00:43:14.573 "params": { 00:43:14.573 "name": "Nvme$subsystem", 00:43:14.573 "trtype": "$TEST_TRANSPORT", 00:43:14.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:14.573 "adrfam": "ipv4", 00:43:14.573 "trsvcid": "$NVMF_PORT", 00:43:14.573 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:14.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:14.573 "hdgst": ${hdgst:-false}, 00:43:14.573 "ddgst": ${ddgst:-false} 00:43:14.573 }, 00:43:14.573 "method": "bdev_nvme_attach_controller" 00:43:14.573 } 00:43:14.573 EOF 00:43:14.573 )") 00:43:14.573 03:04:43 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:14.573 "params": { 00:43:14.573 "name": "Nvme0", 00:43:14.573 "trtype": "tcp", 00:43:14.573 "traddr": "10.0.0.2", 00:43:14.573 "adrfam": "ipv4", 00:43:14.573 "trsvcid": "4420", 00:43:14.573 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:14.573 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:14.573 "hdgst": false, 00:43:14.573 "ddgst": false 00:43:14.573 }, 00:43:14.573 "method": "bdev_nvme_attach_controller" 00:43:14.573 },{ 00:43:14.573 "params": { 00:43:14.573 "name": "Nvme1", 00:43:14.573 "trtype": "tcp", 00:43:14.573 "traddr": "10.0.0.2", 00:43:14.573 "adrfam": "ipv4", 00:43:14.573 "trsvcid": "4420", 00:43:14.573 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:14.573 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:14.573 "hdgst": false, 00:43:14.573 "ddgst": false 00:43:14.573 }, 00:43:14.573 "method": "bdev_nvme_attach_controller" 00:43:14.573 }' 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:14.573 03:04:43 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:14.573 03:04:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:14.573 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:43:14.573 ... 00:43:14.573 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:43:14.573 ... 00:43:14.573 fio-3.35 00:43:14.573 Starting 4 threads 00:43:19.846 00:43:19.846 filename0: (groupid=0, jobs=1): err= 0: pid=1309796: Mon Dec 16 03:04:50 2024 00:43:19.846 read: IOPS=2766, BW=21.6MiB/s (22.7MB/s)(108MiB/5003msec) 00:43:19.846 slat (nsec): min=6156, max=39036, avg=8458.63, stdev=2752.50 00:43:19.846 clat (usec): min=984, max=4972, avg=2866.59, stdev=359.16 00:43:19.846 lat (usec): min=996, max=4983, avg=2875.05, stdev=358.95 00:43:19.846 clat percentiles (usec): 00:43:19.846 | 1.00th=[ 1893], 5.00th=[ 2245], 10.00th=[ 2409], 20.00th=[ 2573], 00:43:19.846 | 30.00th=[ 2737], 40.00th=[ 2900], 50.00th=[ 2966], 60.00th=[ 2966], 00:43:19.846 | 70.00th=[ 2999], 80.00th=[ 3032], 90.00th=[ 3228], 95.00th=[ 3392], 00:43:19.846 | 99.00th=[ 3949], 99.50th=[ 4178], 99.90th=[ 4555], 99.95th=[ 4752], 00:43:19.846 | 99.99th=[ 4948] 00:43:19.846 bw ( KiB/s): min=21456, max=23392, per=26.23%, avg=22181.33, stdev=677.88, samples=9 00:43:19.846 iops : min= 2682, max= 2924, avg=2772.67, stdev=84.77, samples=9 00:43:19.846 lat (usec) : 1000=0.01% 00:43:19.846 
lat (msec) : 2=1.33%, 4=97.73%, 10=0.93% 00:43:19.846 cpu : usr=95.84%, sys=3.84%, ctx=8, majf=0, minf=0 00:43:19.846 IO depths : 1=0.1%, 2=2.5%, 4=69.1%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:19.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:19.846 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:19.846 issued rwts: total=13842,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:19.846 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:19.846 filename0: (groupid=0, jobs=1): err= 0: pid=1309797: Mon Dec 16 03:04:50 2024 00:43:19.846 read: IOPS=2574, BW=20.1MiB/s (21.1MB/s)(101MiB/5001msec) 00:43:19.846 slat (nsec): min=6146, max=32246, avg=8350.32, stdev=2790.88 00:43:19.846 clat (usec): min=1016, max=6041, avg=3082.76, stdev=391.28 00:43:19.846 lat (usec): min=1028, max=6048, avg=3091.11, stdev=391.14 00:43:19.846 clat percentiles (usec): 00:43:19.846 | 1.00th=[ 2180], 5.00th=[ 2573], 10.00th=[ 2802], 20.00th=[ 2933], 00:43:19.846 | 30.00th=[ 2966], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 2999], 00:43:19.846 | 70.00th=[ 3130], 80.00th=[ 3261], 90.00th=[ 3490], 95.00th=[ 3752], 00:43:19.846 | 99.00th=[ 4555], 99.50th=[ 4948], 99.90th=[ 5276], 99.95th=[ 5407], 00:43:19.846 | 99.99th=[ 6063] 00:43:19.846 bw ( KiB/s): min=19680, max=21312, per=24.30%, avg=20545.78, stdev=512.40, samples=9 00:43:19.846 iops : min= 2460, max= 2664, avg=2568.22, stdev=64.05, samples=9 00:43:19.846 lat (msec) : 2=0.68%, 4=96.17%, 10=3.15% 00:43:19.846 cpu : usr=96.22%, sys=3.48%, ctx=8, majf=0, minf=9 00:43:19.846 IO depths : 1=0.1%, 2=1.1%, 4=72.4%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:19.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:19.846 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:19.846 issued rwts: total=12876,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:19.846 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:19.846 
filename1: (groupid=0, jobs=1): err= 0: pid=1309799: Mon Dec 16 03:04:50 2024 00:43:19.846 read: IOPS=2563, BW=20.0MiB/s (21.0MB/s)(101MiB/5041msec) 00:43:19.846 slat (nsec): min=6156, max=37173, avg=8591.68, stdev=2872.55 00:43:19.846 clat (usec): min=668, max=40823, avg=3081.09, stdev=687.37 00:43:19.846 lat (usec): min=678, max=40831, avg=3089.68, stdev=687.24 00:43:19.846 clat percentiles (usec): 00:43:19.846 | 1.00th=[ 2147], 5.00th=[ 2573], 10.00th=[ 2769], 20.00th=[ 2933], 00:43:19.846 | 30.00th=[ 2966], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 2999], 00:43:19.846 | 70.00th=[ 3097], 80.00th=[ 3261], 90.00th=[ 3490], 95.00th=[ 3752], 00:43:19.846 | 99.00th=[ 4424], 99.50th=[ 4752], 99.90th=[ 5211], 99.95th=[ 5473], 00:43:19.846 | 99.99th=[40633] 00:43:19.846 bw ( KiB/s): min=20080, max=21312, per=24.37%, avg=20605.33, stdev=459.70, samples=9 00:43:19.846 iops : min= 2510, max= 2664, avg=2575.67, stdev=57.46, samples=9 00:43:19.846 lat (usec) : 750=0.01%, 1000=0.04% 00:43:19.846 lat (msec) : 2=0.52%, 4=96.47%, 10=2.94%, 50=0.02% 00:43:19.846 cpu : usr=96.35%, sys=3.35%, ctx=9, majf=0, minf=9 00:43:19.846 IO depths : 1=0.1%, 2=1.5%, 4=70.7%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:19.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:19.846 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:19.846 issued rwts: total=12921,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:19.846 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:19.846 filename1: (groupid=0, jobs=1): err= 0: pid=1309800: Mon Dec 16 03:04:50 2024 00:43:19.846 read: IOPS=2728, BW=21.3MiB/s (22.3MB/s)(107MiB/5002msec) 00:43:19.846 slat (nsec): min=6148, max=33093, avg=8743.51, stdev=2789.71 00:43:19.846 clat (usec): min=604, max=5512, avg=2908.10, stdev=397.55 00:43:19.846 lat (usec): min=615, max=5523, avg=2916.85, stdev=397.43 00:43:19.846 clat percentiles (usec): 00:43:19.846 | 1.00th=[ 1958], 5.00th=[ 2278], 10.00th=[ 2442], 
20.00th=[ 2638], 00:43:19.846 | 30.00th=[ 2769], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2966], 00:43:19.846 | 70.00th=[ 2999], 80.00th=[ 3032], 90.00th=[ 3294], 95.00th=[ 3589], 00:43:19.846 | 99.00th=[ 4228], 99.50th=[ 4424], 99.90th=[ 5080], 99.95th=[ 5276], 00:43:19.846 | 99.99th=[ 5342] 00:43:19.846 bw ( KiB/s): min=21408, max=22688, per=25.88%, avg=21884.44, stdev=423.53, samples=9 00:43:19.846 iops : min= 2676, max= 2836, avg=2735.56, stdev=52.94, samples=9 00:43:19.846 lat (usec) : 750=0.01% 00:43:19.846 lat (msec) : 2=1.16%, 4=96.91%, 10=1.92% 00:43:19.846 cpu : usr=95.96%, sys=3.72%, ctx=7, majf=0, minf=0 00:43:19.846 IO depths : 1=0.2%, 2=3.1%, 4=66.3%, 8=30.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:19.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:19.846 complete : 0=0.0%, 4=94.7%, 8=5.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:19.846 issued rwts: total=13646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:19.846 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:19.846 00:43:19.846 Run status group 0 (all jobs): 00:43:19.846 READ: bw=82.6MiB/s (86.6MB/s), 20.0MiB/s-21.6MiB/s (21.0MB/s-22.7MB/s), io=416MiB (437MB), run=5001-5041msec 00:43:19.846 03:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:43:19.846 03:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:19.846 03:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:19.846 03:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:19.846 03:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:19.846 03:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:19.846 03:04:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:19.846 03:04:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # 
set +x 00:43:19.846 03:04:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:19.847 03:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:19.847 03:04:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:19.847 03:04:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:19.847 03:04:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:19.847 03:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:19.847 03:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:19.847 03:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:43:19.847 03:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:19.847 03:04:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:19.847 03:04:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:19.847 03:04:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:19.847 03:04:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:19.847 03:04:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:19.847 03:04:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:19.847 03:04:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:19.847 00:43:19.847 real 0m24.513s 00:43:19.847 user 4m54.866s 00:43:19.847 sys 0m5.040s 00:43:19.847 03:04:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:19.847 03:04:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:19.847 ************************************ 
00:43:19.847 END TEST fio_dif_rand_params 00:43:19.847 ************************************ 00:43:19.847 03:04:50 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:43:19.847 03:04:50 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:19.847 03:04:50 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:19.847 03:04:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:19.847 ************************************ 00:43:19.847 START TEST fio_dif_digest 00:43:19.847 ************************************ 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:43:19.847 
03:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:19.847 bdev_null0 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:19.847 [2024-12-16 03:04:50.441619] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 
00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:19.847 { 00:43:19.847 "params": { 00:43:19.847 "name": "Nvme$subsystem", 00:43:19.847 "trtype": "$TEST_TRANSPORT", 00:43:19.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:19.847 "adrfam": "ipv4", 00:43:19.847 "trsvcid": "$NVMF_PORT", 00:43:19.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:19.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:19.847 "hdgst": ${hdgst:-false}, 00:43:19.847 "ddgst": ${ddgst:-false} 00:43:19.847 }, 00:43:19.847 "method": "bdev_nvme_attach_controller" 00:43:19.847 } 00:43:19.847 EOF 00:43:19.847 )") 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:43:19.847 03:04:50 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:19.847 "params": { 00:43:19.847 "name": "Nvme0", 00:43:19.847 "trtype": "tcp", 00:43:19.847 "traddr": "10.0.0.2", 00:43:19.847 "adrfam": "ipv4", 00:43:19.847 "trsvcid": "4420", 00:43:19.847 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:19.847 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:19.847 "hdgst": true, 00:43:19.847 "ddgst": true 00:43:19.847 }, 00:43:19.847 "method": "bdev_nvme_attach_controller" 00:43:19.847 }' 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:19.847 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:20.127 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:20.127 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:20.127 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:20.127 03:04:50 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:20.385 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:43:20.385 ... 
00:43:20.385 fio-3.35 00:43:20.385 Starting 3 threads 00:43:32.669 00:43:32.669 filename0: (groupid=0, jobs=1): err= 0: pid=1310947: Mon Dec 16 03:05:01 2024 00:43:32.669 read: IOPS=291, BW=36.5MiB/s (38.3MB/s)(367MiB/10047msec) 00:43:32.669 slat (nsec): min=6477, max=40616, avg=12601.90, stdev=4122.82 00:43:32.669 clat (usec): min=5652, max=52842, avg=10246.37, stdev=1270.70 00:43:32.669 lat (usec): min=5662, max=52859, avg=10258.97, stdev=1270.73 00:43:32.669 clat percentiles (usec): 00:43:32.669 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9634], 00:43:32.669 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10290], 60.00th=[10421], 00:43:32.669 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11076], 95.00th=[11338], 00:43:32.669 | 99.00th=[11863], 99.50th=[12125], 99.90th=[12911], 99.95th=[48497], 00:43:32.669 | 99.99th=[52691] 00:43:32.669 bw ( KiB/s): min=36096, max=38656, per=35.68%, avg=37516.80, stdev=578.28, samples=20 00:43:32.669 iops : min= 282, max= 302, avg=293.10, stdev= 4.52, samples=20 00:43:32.669 lat (msec) : 10=36.28%, 20=63.65%, 50=0.03%, 100=0.03% 00:43:32.669 cpu : usr=94.69%, sys=5.01%, ctx=15, majf=0, minf=71 00:43:32.669 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:32.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:32.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:32.669 issued rwts: total=2933,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:32.669 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:32.669 filename0: (groupid=0, jobs=1): err= 0: pid=1310948: Mon Dec 16 03:05:01 2024 00:43:32.669 read: IOPS=269, BW=33.7MiB/s (35.3MB/s)(337MiB/10005msec) 00:43:32.669 slat (nsec): min=6507, max=40911, avg=12875.83, stdev=4239.29 00:43:32.669 clat (usec): min=7978, max=14483, avg=11109.18, stdev=750.73 00:43:32.669 lat (usec): min=7991, max=14508, avg=11122.06, stdev=750.65 00:43:32.669 clat percentiles (usec): 00:43:32.669 
| 1.00th=[ 9372], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[10552], 00:43:32.669 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:43:32.669 | 70.00th=[11469], 80.00th=[11731], 90.00th=[11994], 95.00th=[12387], 00:43:32.669 | 99.00th=[13042], 99.50th=[13435], 99.90th=[13960], 99.95th=[14091], 00:43:32.669 | 99.99th=[14484] 00:43:32.669 bw ( KiB/s): min=34048, max=35328, per=32.82%, avg=34506.11, stdev=439.93, samples=19 00:43:32.669 iops : min= 266, max= 276, avg=269.58, stdev= 3.44, samples=19 00:43:32.669 lat (msec) : 10=5.86%, 20=94.14% 00:43:32.669 cpu : usr=94.98%, sys=4.73%, ctx=16, majf=0, minf=110 00:43:32.669 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:32.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:32.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:32.669 issued rwts: total=2698,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:32.669 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:32.669 filename0: (groupid=0, jobs=1): err= 0: pid=1310949: Mon Dec 16 03:05:01 2024 00:43:32.669 read: IOPS=261, BW=32.6MiB/s (34.2MB/s)(328MiB/10045msec) 00:43:32.669 slat (nsec): min=6508, max=53548, avg=13076.06, stdev=4058.75 00:43:32.669 clat (usec): min=8677, max=48860, avg=11461.73, stdev=1267.14 00:43:32.669 lat (usec): min=8688, max=48888, avg=11474.80, stdev=1267.61 00:43:32.669 clat percentiles (usec): 00:43:32.669 | 1.00th=[ 9634], 5.00th=[10159], 10.00th=[10552], 20.00th=[10814], 00:43:32.669 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:43:32.669 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12518], 95.00th=[12780], 00:43:32.669 | 99.00th=[13435], 99.50th=[13829], 99.90th=[16581], 99.95th=[44827], 00:43:32.669 | 99.99th=[49021] 00:43:32.669 bw ( KiB/s): min=32256, max=34816, per=31.89%, avg=33536.00, stdev=563.32, samples=20 00:43:32.669 iops : min= 252, max= 272, avg=262.00, stdev= 4.40, samples=20 
00:43:32.669 lat (msec) : 10=3.13%, 20=96.80%, 50=0.08% 00:43:32.669 cpu : usr=95.40%, sys=4.25%, ctx=86, majf=0, minf=55 00:43:32.669 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:32.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:32.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:32.669 issued rwts: total=2622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:32.669 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:32.669 00:43:32.669 Run status group 0 (all jobs): 00:43:32.669 READ: bw=103MiB/s (108MB/s), 32.6MiB/s-36.5MiB/s (34.2MB/s-38.3MB/s), io=1032MiB (1082MB), run=10005-10047msec 00:43:32.669 03:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:43:32.669 03:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:43:32.669 03:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:43:32.669 03:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:32.669 03:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:43:32.669 03:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:32.669 03:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:32.669 03:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:32.669 03:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:32.669 03:05:01 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:32.669 03:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:32.669 03:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:32.669 03:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:32.669 00:43:32.669 real 0m11.149s 
00:43:32.669 user 0m35.331s 00:43:32.669 sys 0m1.695s 00:43:32.669 03:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:32.669 03:05:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:32.669 ************************************ 00:43:32.669 END TEST fio_dif_digest 00:43:32.669 ************************************ 00:43:32.669 03:05:01 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:43:32.669 03:05:01 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:43:32.669 03:05:01 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:32.669 03:05:01 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:43:32.669 03:05:01 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:32.669 03:05:01 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:43:32.669 03:05:01 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:32.669 03:05:01 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:32.669 rmmod nvme_tcp 00:43:32.669 rmmod nvme_fabrics 00:43:32.669 rmmod nvme_keyring 00:43:32.669 03:05:01 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:32.669 03:05:01 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:43:32.669 03:05:01 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:43:32.669 03:05:01 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1302579 ']' 00:43:32.669 03:05:01 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1302579 00:43:32.669 03:05:01 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1302579 ']' 00:43:32.669 03:05:01 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1302579 00:43:32.670 03:05:01 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:43:32.670 03:05:01 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:32.670 03:05:01 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1302579 00:43:32.670 03:05:01 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:32.670 03:05:01 nvmf_dif -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:32.670 03:05:01 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1302579' 00:43:32.670 killing process with pid 1302579 00:43:32.670 03:05:01 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1302579 00:43:32.670 03:05:01 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1302579 00:43:32.670 03:05:01 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:43:32.670 03:05:01 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:34.048 Waiting for block devices as requested 00:43:34.049 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:43:34.049 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:34.308 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:34.308 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:34.308 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:34.567 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:34.567 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:34.567 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:34.567 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:34.826 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:34.826 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:34.826 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:35.084 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:35.084 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:35.084 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:35.343 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:35.343 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:35.343 03:05:05 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:35.343 03:05:05 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:35.343 03:05:05 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:43:35.343 03:05:05 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:43:35.343 03:05:05 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:43:35.343 03:05:05 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:43:35.343 03:05:05 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:35.343 03:05:05 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:35.343 03:05:05 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:35.343 03:05:05 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:35.343 03:05:05 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:37.879 03:05:07 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:37.879 00:43:37.879 real 1m14.163s 00:43:37.879 user 7m12.699s 00:43:37.879 sys 0m20.600s 00:43:37.879 03:05:07 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:37.879 03:05:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:37.879 ************************************ 00:43:37.879 END TEST nvmf_dif 00:43:37.879 ************************************ 00:43:37.879 03:05:08 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:37.879 03:05:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:37.879 03:05:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:37.879 03:05:08 -- common/autotest_common.sh@10 -- # set +x 00:43:37.879 ************************************ 00:43:37.879 START TEST nvmf_abort_qd_sizes 00:43:37.879 ************************************ 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:37.879 * Looking for test storage... 
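The teardown trace above restores the firewall with a save/filter/restore pipeline: `iptables-save | grep -v SPDK_NVMF | iptables-restore`, so only the comment-tagged rules the test added are dropped. A minimal root-free sketch of that filtering step, run against a sample ruleset held in a variable rather than the live firewall (the rule strings are illustrative, not taken from this host):

```shell
# Sketch of the cleanup pattern from the trace:
#   iptables-save | grep -v SPDK_NVMF | iptables-restore
# Filter a sample dump in a variable so this runs without root;
# the three rules below are made up for illustration.
rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
-A INPUT -p icmp -j ACCEPT'

# Keep every rule except those tagged with the SPDK_NVMF comment.
kept=$(printf '%s\n' "$rules" | grep -v SPDK_NVMF)
printf '%s\n' "$kept"
```

Against a live firewall the same `grep -v` sits between `iptables-save` and `iptables-restore`; tagging test rules with a fixed comment (as `ipts` does later in this log) is what makes this selective removal possible.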
00:43:37.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:37.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:37.879 --rc genhtml_branch_coverage=1 00:43:37.879 --rc genhtml_function_coverage=1 00:43:37.879 --rc genhtml_legend=1 00:43:37.879 --rc geninfo_all_blocks=1 00:43:37.879 --rc geninfo_unexecuted_blocks=1 00:43:37.879 00:43:37.879 ' 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:37.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:37.879 --rc genhtml_branch_coverage=1 00:43:37.879 --rc genhtml_function_coverage=1 00:43:37.879 --rc genhtml_legend=1 00:43:37.879 --rc 
geninfo_all_blocks=1 00:43:37.879 --rc geninfo_unexecuted_blocks=1 00:43:37.879 00:43:37.879 ' 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:37.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:37.879 --rc genhtml_branch_coverage=1 00:43:37.879 --rc genhtml_function_coverage=1 00:43:37.879 --rc genhtml_legend=1 00:43:37.879 --rc geninfo_all_blocks=1 00:43:37.879 --rc geninfo_unexecuted_blocks=1 00:43:37.879 00:43:37.879 ' 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:37.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:37.879 --rc genhtml_branch_coverage=1 00:43:37.879 --rc genhtml_function_coverage=1 00:43:37.879 --rc genhtml_legend=1 00:43:37.879 --rc geninfo_all_blocks=1 00:43:37.879 --rc geninfo_unexecuted_blocks=1 00:43:37.879 00:43:37.879 ' 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:37.879 03:05:08 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:37.879 03:05:08 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:43:37.880 03:05:08 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:37.880 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:43:37.880 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:37.880 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:37.880 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:37.880 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:37.880 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:37.880 03:05:08 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:37.880 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:37.880 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:37.880 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:37.880 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:37.880 03:05:08 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:43:37.880 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:37.880 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:37.880 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:37.880 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:37.880 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:37.880 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:37.880 03:05:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:37.880 03:05:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:37.880 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:37.880 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:37.880 03:05:08 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:43:37.880 03:05:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:44.453 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:44.453 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:43:44.453 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:44.454 03:05:13 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:43:44.454 Found 0000:af:00.0 (0x8086 - 0x159b) 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:43:44.454 Found 0000:af:00.1 (0x8086 - 0x159b) 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:43:44.454 Found net devices under 0000:af:00.0: cvl_0_0 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:43:44.454 Found net devices under 0000:af:00.1: cvl_0_1 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:44.454 03:05:13 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:44.454 03:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:44.454 03:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:44.454 03:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:44.454 03:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:44.454 03:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:44.454 03:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:44.454 03:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:44.454 03:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:44.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:44.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:43:44.454 00:43:44.454 --- 10.0.0.2 ping statistics --- 00:43:44.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:44.454 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:43:44.454 03:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:44.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:44.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:43:44.454 00:43:44.454 --- 10.0.0.1 ping statistics --- 00:43:44.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:44.454 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:43:44.454 03:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:44.454 03:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:43:44.454 03:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:43:44.454 03:05:14 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:46.358 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:46.358 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:46.358 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:46.358 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:46.358 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:46.617 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:46.617 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:46.617 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:46.617 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:46.617 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:46.617 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:46.617 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:46.617 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:46.617 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:46.617 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:46.617 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:47.554 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:43:47.554 03:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:47.554 03:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:47.554 03:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:47.554 03:05:18 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:47.554 03:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:47.554 03:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:47.554 03:05:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:43:47.554 03:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:47.554 03:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:47.554 03:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:47.554 03:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1318729 00:43:47.554 03:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1318729 00:43:47.554 03:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:43:47.554 03:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1318729 ']' 00:43:47.554 03:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:47.554 03:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:47.554 03:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:47.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:47.554 03:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:47.554 03:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:47.554 [2024-12-16 03:05:18.165222] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
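`nvmfappstart` above launches `nvmf_tgt` inside the network namespace and then `waitforlisten` blocks until the target's RPC socket (`/var/tmp/spdk.sock`) is ready. A hedged stand-alone sketch of that poll-until-ready idea, using a dummy background job and a plain file in place of the real daemon and socket:

```shell
# Sketch of the waitforlisten pattern: start a background process,
# then poll until the path it is expected to create appears.
# /tmp/demo.sock and the 1-second dummy job are stand-ins, not
# SPDK's real target or its default RPC socket handling.
sock=/tmp/demo.sock
rm -f "$sock"

( sleep 1; : > "$sock" ) &   # stand-in for nvmf_tgt creating its socket
pid=$!

# Poll up to ~10 seconds in 0.1s steps, like a bounded waitforlisten.
for i in $(seq 1 100); do
    [ -e "$sock" ] && break
    sleep 0.1
done

[ -e "$sock" ] && echo "listening (pid $pid)"
wait "$pid"
```

The real helper additionally issues an RPC over the socket to confirm the application answers, rather than only checking that the path exists.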
00:43:47.554 [2024-12-16 03:05:18.165262] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:47.813 [2024-12-16 03:05:18.244250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:47.813 [2024-12-16 03:05:18.268338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:47.813 [2024-12-16 03:05:18.268377] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:47.813 [2024-12-16 03:05:18.268385] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:47.813 [2024-12-16 03:05:18.268391] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:47.813 [2024-12-16 03:05:18.268398] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:43:47.813 [2024-12-16 03:05:18.269718] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:43:47.813 [2024-12-16 03:05:18.269828] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:43:47.813 [2024-12-16 03:05:18.269942] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:43:47.813 [2024-12-16 03:05:18.269942] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:43:47.813 03:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:47.813 03:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:43:47.813 03:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:47.813 03:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:47.813 03:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:47.813 03:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:47.813 03:05:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:43:47.813 03:05:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:43:47.813 03:05:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:43:47.813 03:05:18 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:43:47.813 03:05:18 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:43:47.813 03:05:18 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:43:47.813 03:05:18 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:43:47.813 03:05:18 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:43:47.813 03:05:18 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:43:47.813 03:05:18 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:43:47.813 03:05:18 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:43:47.813 03:05:18 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:43:47.813 03:05:18 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:43:47.813 03:05:18 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:43:47.813 03:05:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:43:47.813 03:05:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:43:47.813 03:05:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:43:47.813 03:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:47.813 03:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:47.813 03:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:47.813 ************************************ 00:43:47.813 START TEST spdk_target_abort 00:43:47.813 ************************************ 00:43:47.813 03:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:43:47.813 03:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:43:47.813 03:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:43:47.813 03:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:47.813 03:05:18 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:51.099 spdk_targetn1 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:51.100 [2024-12-16 03:05:21.274027] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:51.100 [2024-12-16 03:05:21.322402] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:51.100 03:05:21 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:54.385 Initializing NVMe Controllers 00:43:54.385 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:54.385 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:54.385 Initialization complete. Launching workers. 
00:43:54.385 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15759, failed: 0 00:43:54.385 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1372, failed to submit 14387 00:43:54.385 success 718, unsuccessful 654, failed 0 00:43:54.385 03:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:54.385 03:05:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:57.668 Initializing NVMe Controllers 00:43:57.668 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:57.668 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:57.668 Initialization complete. Launching workers. 00:43:57.668 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8454, failed: 0 00:43:57.668 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1244, failed to submit 7210 00:43:57.668 success 321, unsuccessful 923, failed 0 00:43:57.668 03:05:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:57.668 03:05:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:00.955 Initializing NVMe Controllers 00:44:00.955 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:00.955 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:00.955 Initialization complete. Launching workers. 
00:44:00.955 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38756, failed: 0 00:44:00.955 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2806, failed to submit 35950 00:44:00.955 success 617, unsuccessful 2189, failed 0 00:44:00.955 03:05:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:44:00.955 03:05:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:00.955 03:05:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:00.955 03:05:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:00.955 03:05:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:44:00.955 03:05:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:00.955 03:05:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:01.891 03:05:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:01.891 03:05:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1318729 00:44:01.891 03:05:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1318729 ']' 00:44:01.891 03:05:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1318729 00:44:01.891 03:05:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:44:01.891 03:05:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:01.891 03:05:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1318729 00:44:01.891 03:05:32 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:01.891 03:05:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:01.891 03:05:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1318729' 00:44:01.891 killing process with pid 1318729 00:44:01.891 03:05:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1318729 00:44:01.891 03:05:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1318729 00:44:01.891 00:44:01.891 real 0m14.073s 00:44:01.891 user 0m53.974s 00:44:01.891 sys 0m2.219s 00:44:01.891 03:05:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:01.891 03:05:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:01.891 ************************************ 00:44:01.891 END TEST spdk_target_abort 00:44:01.891 ************************************ 00:44:02.150 03:05:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:44:02.150 03:05:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:02.150 03:05:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:02.150 03:05:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:02.150 ************************************ 00:44:02.150 START TEST kernel_target_abort 00:44:02.150 ************************************ 00:44:02.150 03:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:44:02.150 03:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:44:02.150 03:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:44:02.150 03:05:32 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:44:02.150 03:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:44:02.150 03:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:44:02.150 03:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:44:02.150 03:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:44:02.150 03:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:44:02.150 03:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:44:02.150 03:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:44:02.150 03:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:44:02.150 03:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:44:02.150 03:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:44:02.150 03:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:44:02.150 03:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:02.150 03:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:02.150 03:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:44:02.150 03:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:44:02.150 03:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:44:02.150 03:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:44:02.150 03:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:44:02.150 03:05:32 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:04.686 Waiting for block devices as requested 00:44:04.686 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:44:04.945 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:04.945 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:05.203 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:05.203 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:05.203 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:05.203 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:05.463 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:05.463 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:05.463 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:05.722 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:05.722 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:05.722 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:05.722 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:05.980 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:05.980 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:05.980 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:06.239 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:44:06.239 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:44:06.239 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:44:06.239 03:05:36 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:44:06.239 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:44:06.239 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:44:06.239 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:44:06.239 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:44:06.239 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:44:06.239 No valid GPT data, bailing 00:44:06.239 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:44:06.239 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:44:06.239 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:44:06.239 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:44:06.239 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:44:06.239 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:06.239 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:06.239 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:44:06.239 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:44:06.239 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:44:06.239 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:44:06.239 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:44:06.239 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:44:06.239 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:44:06.239 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:44:06.239 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:44:06.239 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:44:06.239 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:44:06.239 00:44:06.239 Discovery Log Number of Records 2, Generation counter 2 00:44:06.239 =====Discovery Log Entry 0====== 00:44:06.239 trtype: tcp 00:44:06.239 adrfam: ipv4 00:44:06.239 subtype: current discovery subsystem 00:44:06.239 treq: not specified, sq flow control disable supported 00:44:06.239 portid: 1 00:44:06.239 trsvcid: 4420 00:44:06.239 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:44:06.239 traddr: 10.0.0.1 00:44:06.239 eflags: none 00:44:06.239 sectype: none 00:44:06.239 =====Discovery Log Entry 1====== 00:44:06.239 trtype: tcp 00:44:06.239 adrfam: ipv4 00:44:06.239 subtype: nvme subsystem 00:44:06.239 treq: not specified, sq flow control disable supported 00:44:06.239 portid: 1 00:44:06.239 trsvcid: 4420 00:44:06.239 subnqn: nqn.2016-06.io.spdk:testnqn 00:44:06.239 traddr: 10.0.0.1 00:44:06.239 eflags: none 00:44:06.239 sectype: none 00:44:06.239 03:05:36 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:44:06.239 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:44:06.239 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:44:06.240 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:44:06.240 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:44:06.240 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:44:06.240 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:44:06.240 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:44:06.240 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:44:06.240 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:06.240 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:44:06.240 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:06.240 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:44:06.240 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:06.240 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:44:06.240 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:44:06.240 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:44:06.240 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:06.240 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:06.240 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:06.240 03:05:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:09.527 Initializing NVMe Controllers 00:44:09.527 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:09.527 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:09.527 Initialization complete. Launching workers. 
00:44:09.527 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 96516, failed: 0 00:44:09.527 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 96516, failed to submit 0 00:44:09.527 success 0, unsuccessful 96516, failed 0 00:44:09.527 03:05:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:09.527 03:05:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:12.815 Initializing NVMe Controllers 00:44:12.815 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:12.815 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:12.815 Initialization complete. Launching workers. 00:44:12.815 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 151065, failed: 0 00:44:12.815 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37878, failed to submit 113187 00:44:12.815 success 0, unsuccessful 37878, failed 0 00:44:12.815 03:05:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:12.815 03:05:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:16.103 Initializing NVMe Controllers 00:44:16.103 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:16.103 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:16.103 Initialization complete. Launching workers. 
00:44:16.103 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 143854, failed: 0 00:44:16.103 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36014, failed to submit 107840 00:44:16.103 success 0, unsuccessful 36014, failed 0 00:44:16.103 03:05:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:44:16.103 03:05:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:44:16.103 03:05:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:44:16.103 03:05:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:16.103 03:05:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:16.103 03:05:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:44:16.103 03:05:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:16.104 03:05:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:44:16.104 03:05:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:44:16.104 03:05:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:18.762 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:44:18.762 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:44:18.762 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:44:18.762 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:44:18.762 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:44:18.762 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:44:18.762 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:44:18.762 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:44:18.762 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:44:18.762 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:44:18.762 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:44:18.762 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:44:18.762 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:44:18.762 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:44:18.762 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:44:18.762 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:44:19.332 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:44:19.592 00:44:19.592 real 0m17.499s 00:44:19.592 user 0m9.124s 00:44:19.592 sys 0m5.044s 00:44:19.592 03:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:19.592 03:05:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:19.592 ************************************ 00:44:19.592 END TEST kernel_target_abort 00:44:19.592 ************************************ 00:44:19.592 03:05:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:44:19.592 03:05:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:44:19.592 03:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:19.592 03:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:44:19.592 03:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:19.592 03:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:44:19.592 03:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:19.592 03:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:19.592 rmmod nvme_tcp 00:44:19.592 rmmod nvme_fabrics 00:44:19.592 rmmod nvme_keyring 00:44:19.592 03:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:44:19.592 03:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:44:19.592 03:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:44:19.592 03:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1318729 ']' 00:44:19.592 03:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1318729 00:44:19.592 03:05:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1318729 ']' 00:44:19.592 03:05:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1318729 00:44:19.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1318729) - No such process 00:44:19.592 03:05:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1318729 is not found' 00:44:19.592 Process with pid 1318729 is not found 00:44:19.592 03:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:44:19.592 03:05:50 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:22.886 Waiting for block devices as requested 00:44:22.886 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:44:22.886 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:22.886 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:22.886 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:22.886 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:22.886 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:22.886 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:22.886 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:23.146 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:23.146 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:23.146 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:23.146 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:23.405 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:23.405 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:23.405 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:23.664 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:23.664 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:23.664 03:05:54 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:23.664 03:05:54 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:23.664 03:05:54 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:44:23.664 03:05:54 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:44:23.664 03:05:54 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:23.664 03:05:54 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:44:23.664 03:05:54 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:23.664 03:05:54 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:23.664 03:05:54 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:23.664 03:05:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:23.664 03:05:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:26.203 03:05:56 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:26.203 00:44:26.203 real 0m48.266s 00:44:26.203 user 1m7.495s 00:44:26.203 sys 0m15.933s 00:44:26.203 03:05:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:26.203 03:05:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:26.203 ************************************ 00:44:26.203 END TEST nvmf_abort_qd_sizes 00:44:26.203 ************************************ 00:44:26.203 03:05:56 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:44:26.203 03:05:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:26.203 03:05:56 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:44:26.203 03:05:56 -- common/autotest_common.sh@10 -- # set +x 00:44:26.203 ************************************ 00:44:26.203 START TEST keyring_file 00:44:26.203 ************************************ 00:44:26.203 03:05:56 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:44:26.203 * Looking for test storage... 00:44:26.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:44:26.203 03:05:56 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:26.203 03:05:56 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:44:26.203 03:05:56 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:26.203 03:05:56 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:26.203 03:05:56 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:26.203 03:05:56 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:26.203 03:05:56 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:26.203 03:05:56 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:44:26.203 03:05:56 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:44:26.203 03:05:56 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:44:26.203 03:05:56 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:44:26.203 03:05:56 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:44:26.203 03:05:56 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:44:26.203 03:05:56 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:44:26.203 03:05:56 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:26.203 03:05:56 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:44:26.203 03:05:56 keyring_file -- scripts/common.sh@345 -- # : 1 00:44:26.203 03:05:56 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:26.203 03:05:56 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:26.203 03:05:56 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:44:26.203 03:05:56 keyring_file -- scripts/common.sh@353 -- # local d=1 00:44:26.203 03:05:56 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:26.203 03:05:56 keyring_file -- scripts/common.sh@355 -- # echo 1 00:44:26.203 03:05:56 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:44:26.203 03:05:56 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:44:26.203 03:05:56 keyring_file -- scripts/common.sh@353 -- # local d=2 00:44:26.204 03:05:56 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:26.204 03:05:56 keyring_file -- scripts/common.sh@355 -- # echo 2 00:44:26.204 03:05:56 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:44:26.204 03:05:56 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:26.204 03:05:56 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:26.204 03:05:56 keyring_file -- scripts/common.sh@368 -- # return 0 00:44:26.204 03:05:56 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:26.204 03:05:56 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:26.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:26.204 --rc genhtml_branch_coverage=1 00:44:26.204 --rc genhtml_function_coverage=1 00:44:26.204 --rc genhtml_legend=1 00:44:26.204 --rc geninfo_all_blocks=1 00:44:26.204 --rc geninfo_unexecuted_blocks=1 00:44:26.204 00:44:26.204 ' 00:44:26.204 03:05:56 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:26.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:26.204 --rc genhtml_branch_coverage=1 00:44:26.204 --rc genhtml_function_coverage=1 00:44:26.204 --rc genhtml_legend=1 00:44:26.204 --rc geninfo_all_blocks=1 00:44:26.204 --rc 
geninfo_unexecuted_blocks=1 00:44:26.204 00:44:26.204 ' 00:44:26.204 03:05:56 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:26.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:26.204 --rc genhtml_branch_coverage=1 00:44:26.204 --rc genhtml_function_coverage=1 00:44:26.204 --rc genhtml_legend=1 00:44:26.204 --rc geninfo_all_blocks=1 00:44:26.204 --rc geninfo_unexecuted_blocks=1 00:44:26.204 00:44:26.204 ' 00:44:26.204 03:05:56 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:26.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:26.204 --rc genhtml_branch_coverage=1 00:44:26.204 --rc genhtml_function_coverage=1 00:44:26.204 --rc genhtml_legend=1 00:44:26.204 --rc geninfo_all_blocks=1 00:44:26.204 --rc geninfo_unexecuted_blocks=1 00:44:26.204 00:44:26.204 ' 00:44:26.204 03:05:56 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:44:26.204 03:05:56 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:26.204 03:05:56 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:26.204 03:05:56 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:44:26.204 03:05:56 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:26.204 03:05:56 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:26.204 03:05:56 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:26.204 03:05:56 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:26.204 03:05:56 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:26.204 03:05:56 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:26.204 03:05:56 keyring_file -- paths/export.sh@5 -- # export PATH 00:44:26.204 03:05:56 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@51 -- # : 0 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:44:26.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:26.204 03:05:56 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:26.204 03:05:56 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:26.204 03:05:56 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:26.204 03:05:56 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:44:26.204 03:05:56 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:44:26.204 03:05:56 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:44:26.204 03:05:56 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:44:26.204 03:05:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:44:26.204 03:05:56 keyring_file -- keyring/common.sh@17 -- # name=key0 00:44:26.204 03:05:56 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:26.204 03:05:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:26.204 03:05:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:26.204 03:05:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.HB3QOd3aso 00:44:26.204 03:05:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:26.204 03:05:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.HB3QOd3aso 00:44:26.204 03:05:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.HB3QOd3aso 00:44:26.204 03:05:56 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.HB3QOd3aso 00:44:26.204 03:05:56 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:44:26.204 03:05:56 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:44:26.204 03:05:56 keyring_file -- keyring/common.sh@17 -- # name=key1 00:44:26.204 03:05:56 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:26.204 03:05:56 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:26.204 03:05:56 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:26.204 03:05:56 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.CFvQwMIEw8 00:44:26.204 03:05:56 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:26.204 03:05:56 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:26.204 03:05:56 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.CFvQwMIEw8 00:44:26.204 03:05:56 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.CFvQwMIEw8 00:44:26.204 03:05:56 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.CFvQwMIEw8 
00:44:26.204 03:05:56 keyring_file -- keyring/file.sh@30 -- # tgtpid=1327184 00:44:26.204 03:05:56 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:44:26.204 03:05:56 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1327184 00:44:26.204 03:05:56 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1327184 ']' 00:44:26.205 03:05:56 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:26.205 03:05:56 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:26.205 03:05:56 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:26.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:26.205 03:05:56 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:26.205 03:05:56 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:26.205 [2024-12-16 03:05:56.773665] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:44:26.205 [2024-12-16 03:05:56.773716] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327184 ] 00:44:26.205 [2024-12-16 03:05:56.845511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:26.464 [2024-12-16 03:05:56.876784] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:44:26.464 03:05:57 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:26.464 03:05:57 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:26.464 03:05:57 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:44:26.464 03:05:57 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:26.464 03:05:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:26.464 [2024-12-16 03:05:57.092295] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:26.464 null0 00:44:26.724 [2024-12-16 03:05:57.124338] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:26.724 [2024-12-16 03:05:57.124617] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:26.724 03:05:57 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:26.724 03:05:57 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:26.724 03:05:57 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:26.724 03:05:57 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:26.724 03:05:57 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:44:26.724 03:05:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:44:26.724 03:05:57 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:44:26.724 03:05:57 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:26.724 03:05:57 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:26.724 03:05:57 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:26.724 03:05:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:26.724 [2024-12-16 03:05:57.156410] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:44:26.724 request: 00:44:26.724 { 00:44:26.724 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:44:26.724 "secure_channel": false, 00:44:26.724 "listen_address": { 00:44:26.724 "trtype": "tcp", 00:44:26.724 "traddr": "127.0.0.1", 00:44:26.724 "trsvcid": "4420" 00:44:26.724 }, 00:44:26.724 "method": "nvmf_subsystem_add_listener", 00:44:26.724 "req_id": 1 00:44:26.724 } 00:44:26.724 Got JSON-RPC error response 00:44:26.724 response: 00:44:26.724 { 00:44:26.724 "code": -32602, 00:44:26.724 "message": "Invalid parameters" 00:44:26.724 } 00:44:26.724 03:05:57 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:44:26.724 03:05:57 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:26.724 03:05:57 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:26.724 03:05:57 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:26.724 03:05:57 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:26.724 03:05:57 keyring_file -- keyring/file.sh@47 -- # bperfpid=1327189 00:44:26.724 03:05:57 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1327189 /var/tmp/bperf.sock 00:44:26.724 03:05:57 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:44:26.724 03:05:57 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1327189 ']' 00:44:26.724 03:05:57 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:26.724 03:05:57 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:26.724 03:05:57 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:26.724 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:26.724 03:05:57 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:26.724 03:05:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:26.724 [2024-12-16 03:05:57.210813] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:44:26.724 [2024-12-16 03:05:57.210870] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327189 ] 00:44:26.724 [2024-12-16 03:05:57.284046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:26.724 [2024-12-16 03:05:57.306723] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:26.984 03:05:57 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:26.984 03:05:57 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:26.984 03:05:57 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HB3QOd3aso 00:44:26.984 03:05:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HB3QOd3aso 00:44:26.984 03:05:57 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.CFvQwMIEw8 00:44:26.984 03:05:57 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.CFvQwMIEw8 00:44:27.243 03:05:57 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:44:27.243 03:05:57 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:44:27.243 03:05:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:27.243 03:05:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:27.243 03:05:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:27.502 03:05:57 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.HB3QOd3aso == \/\t\m\p\/\t\m\p\.\H\B\3\Q\O\d\3\a\s\o ]] 00:44:27.502 03:05:57 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:44:27.502 03:05:57 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:44:27.502 03:05:57 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:27.502 03:05:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:27.502 03:05:57 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:27.760 03:05:58 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.CFvQwMIEw8 == \/\t\m\p\/\t\m\p\.\C\F\v\Q\w\M\I\E\w\8 ]] 00:44:27.760 03:05:58 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:44:27.760 03:05:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:27.760 03:05:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:27.760 03:05:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:27.760 03:05:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:27.760 03:05:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:44:27.760 03:05:58 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:44:27.760 03:05:58 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:44:27.760 03:05:58 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:27.760 03:05:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:27.760 03:05:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:27.760 03:05:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:27.760 03:05:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:28.019 03:05:58 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:44:28.019 03:05:58 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:28.019 03:05:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:28.277 [2024-12-16 03:05:58.741281] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:28.277 nvme0n1 00:44:28.277 03:05:58 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:44:28.277 03:05:58 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:28.277 03:05:58 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:28.277 03:05:58 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:28.277 03:05:58 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:28.277 03:05:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:44:28.536 03:05:59 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:44:28.536 03:05:59 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:44:28.536 03:05:59 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:28.536 03:05:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:28.536 03:05:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:28.536 03:05:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:28.536 03:05:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:28.795 03:05:59 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:44:28.795 03:05:59 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:28.795 Running I/O for 1 seconds... 00:44:29.731 19069.00 IOPS, 74.49 MiB/s 00:44:29.731 Latency(us) 00:44:29.731 [2024-12-16T02:06:00.390Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:29.731 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:44:29.731 nvme0n1 : 1.00 19116.46 74.67 0.00 0.00 6683.46 2917.91 13918.60 00:44:29.731 [2024-12-16T02:06:00.390Z] =================================================================================================================== 00:44:29.731 [2024-12-16T02:06:00.390Z] Total : 19116.46 74.67 0.00 0.00 6683.46 2917.91 13918.60 00:44:29.731 { 00:44:29.731 "results": [ 00:44:29.731 { 00:44:29.731 "job": "nvme0n1", 00:44:29.731 "core_mask": "0x2", 00:44:29.731 "workload": "randrw", 00:44:29.731 "percentage": 50, 00:44:29.731 "status": "finished", 00:44:29.731 "queue_depth": 128, 00:44:29.731 "io_size": 4096, 00:44:29.731 "runtime": 1.004318, 00:44:29.731 "iops": 19116.45514667665, 00:44:29.731 "mibps": 74.67365291670566, 
00:44:29.731 "io_failed": 0, 00:44:29.731 "io_timeout": 0, 00:44:29.731 "avg_latency_us": 6683.455652204108, 00:44:29.731 "min_latency_us": 2917.9123809523808, 00:44:29.731 "max_latency_us": 13918.598095238096 00:44:29.731 } 00:44:29.731 ], 00:44:29.731 "core_count": 1 00:44:29.731 } 00:44:29.731 03:06:00 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:29.731 03:06:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:29.989 03:06:00 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:44:29.989 03:06:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:29.989 03:06:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:29.989 03:06:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:29.989 03:06:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:29.989 03:06:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:30.248 03:06:00 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:44:30.248 03:06:00 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:44:30.248 03:06:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:30.248 03:06:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:30.248 03:06:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:30.248 03:06:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:30.248 03:06:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:30.507 03:06:00 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:44:30.507 03:06:00 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:30.507 03:06:00 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:30.507 03:06:00 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:30.507 03:06:00 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:30.507 03:06:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:30.507 03:06:00 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:30.507 03:06:00 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:30.507 03:06:00 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:30.507 03:06:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:30.507 [2024-12-16 03:06:01.136899] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:30.507 [2024-12-16 03:06:01.137566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f656a0 (107): Transport endpoint is not connected 00:44:30.507 [2024-12-16 03:06:01.138560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f656a0 (9): Bad file descriptor 00:44:30.507 [2024-12-16 03:06:01.139561] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:30.507 [2024-12-16 03:06:01.139570] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:30.507 [2024-12-16 03:06:01.139577] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:30.507 [2024-12-16 03:06:01.139585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:44:30.507 request: 00:44:30.507 { 00:44:30.507 "name": "nvme0", 00:44:30.507 "trtype": "tcp", 00:44:30.507 "traddr": "127.0.0.1", 00:44:30.507 "adrfam": "ipv4", 00:44:30.507 "trsvcid": "4420", 00:44:30.507 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:30.507 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:30.507 "prchk_reftag": false, 00:44:30.507 "prchk_guard": false, 00:44:30.507 "hdgst": false, 00:44:30.507 "ddgst": false, 00:44:30.507 "psk": "key1", 00:44:30.507 "allow_unrecognized_csi": false, 00:44:30.507 "method": "bdev_nvme_attach_controller", 00:44:30.507 "req_id": 1 00:44:30.507 } 00:44:30.507 Got JSON-RPC error response 00:44:30.507 response: 00:44:30.507 { 00:44:30.507 "code": -5, 00:44:30.507 "message": "Input/output error" 00:44:30.507 } 00:44:30.507 03:06:01 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:30.507 03:06:01 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:30.507 03:06:01 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:30.507 03:06:01 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:30.507 03:06:01 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:44:30.507 03:06:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:30.507 03:06:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:30.507 03:06:01 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:44:30.507 03:06:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:30.507 03:06:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:30.765 03:06:01 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:44:30.765 03:06:01 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:44:30.765 03:06:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:30.765 03:06:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:30.765 03:06:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:30.765 03:06:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:30.765 03:06:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:31.024 03:06:01 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:44:31.024 03:06:01 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:44:31.024 03:06:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:31.283 03:06:01 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:44:31.283 03:06:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:44:31.283 03:06:01 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:44:31.283 03:06:01 keyring_file -- keyring/file.sh@78 -- # jq length 00:44:31.283 03:06:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:31.542 03:06:02 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:44:31.542 03:06:02 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.HB3QOd3aso 00:44:31.542 03:06:02 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.HB3QOd3aso 00:44:31.542 03:06:02 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:31.542 03:06:02 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.HB3QOd3aso 00:44:31.542 03:06:02 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:31.542 03:06:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:31.542 03:06:02 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:31.542 03:06:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:31.542 03:06:02 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HB3QOd3aso 00:44:31.542 03:06:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HB3QOd3aso 00:44:31.801 [2024-12-16 03:06:02.279656] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.HB3QOd3aso': 0100660 00:44:31.801 [2024-12-16 03:06:02.279685] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:44:31.801 request: 00:44:31.801 { 00:44:31.801 "name": "key0", 00:44:31.801 "path": "/tmp/tmp.HB3QOd3aso", 00:44:31.801 "method": "keyring_file_add_key", 00:44:31.801 "req_id": 1 00:44:31.801 } 00:44:31.801 Got JSON-RPC error response 00:44:31.801 response: 00:44:31.801 { 00:44:31.801 "code": -1, 00:44:31.801 "message": "Operation not permitted" 00:44:31.801 } 00:44:31.801 03:06:02 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:31.801 03:06:02 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:31.801 03:06:02 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:31.801 03:06:02 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:31.801 03:06:02 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.HB3QOd3aso 00:44:31.801 03:06:02 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.HB3QOd3aso 00:44:31.801 03:06:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.HB3QOd3aso 00:44:32.061 03:06:02 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.HB3QOd3aso 00:44:32.061 03:06:02 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:44:32.061 03:06:02 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:32.061 03:06:02 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:32.061 03:06:02 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:32.061 03:06:02 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:32.061 03:06:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:32.061 03:06:02 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:44:32.061 03:06:02 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:32.061 03:06:02 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:32.061 03:06:02 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:32.061 03:06:02 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:32.061 03:06:02 keyring_file -- 
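The trace above shows the file keyring rejecting a key with mode 0660 ("Invalid permissions for key file ... 0100660") and accepting it once it is chmod'ed to 0600. A minimal Python sketch of that permission rule (the exact mask is an assumption inferred from the 0660-rejected / 0600-accepted behavior, not copied from SPDK's `keyring_file_check_path`):

```python
import os
import stat
import tempfile

def key_file_permissions_ok(path):
    # Assumed rule behind the errors above: the key file must not be
    # readable or writable by group or others, so 0660 fails and 0600 passes.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return (mode & 0o077) == 0

fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o660)
print(key_file_permissions_ok(path))  # False: keyring_file_add_key would be refused
os.chmod(path, 0o600)
print(key_file_permissions_ok(path))  # True
os.unlink(path)
```

This mirrors why the test first expects `keyring_file_add_key` to fail with "Operation not permitted" and only retries after tightening the mode.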
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:32.061 03:06:02 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:32.061 03:06:02 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:32.061 03:06:02 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:32.061 03:06:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:32.320 [2024-12-16 03:06:02.865196] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.HB3QOd3aso': No such file or directory 00:44:32.320 [2024-12-16 03:06:02.865218] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:44:32.320 [2024-12-16 03:06:02.865234] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:44:32.320 [2024-12-16 03:06:02.865241] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:44:32.320 [2024-12-16 03:06:02.865248] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:44:32.320 [2024-12-16 03:06:02.865254] bdev_nvme.c:6801:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:44:32.320 request: 00:44:32.320 { 00:44:32.320 "name": "nvme0", 00:44:32.320 "trtype": "tcp", 00:44:32.320 "traddr": "127.0.0.1", 00:44:32.320 "adrfam": "ipv4", 00:44:32.320 "trsvcid": "4420", 00:44:32.320 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:32.320 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:44:32.320 "prchk_reftag": false, 00:44:32.320 "prchk_guard": false, 00:44:32.320 "hdgst": false, 00:44:32.320 "ddgst": false, 00:44:32.320 "psk": "key0", 00:44:32.320 "allow_unrecognized_csi": false, 00:44:32.320 "method": "bdev_nvme_attach_controller", 00:44:32.320 "req_id": 1 00:44:32.320 } 00:44:32.320 Got JSON-RPC error response 00:44:32.320 response: 00:44:32.320 { 00:44:32.320 "code": -19, 00:44:32.320 "message": "No such device" 00:44:32.320 } 00:44:32.320 03:06:02 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:32.320 03:06:02 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:32.320 03:06:02 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:32.320 03:06:02 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:32.320 03:06:02 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:44:32.320 03:06:02 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:32.580 03:06:03 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:44:32.580 03:06:03 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:44:32.580 03:06:03 keyring_file -- keyring/common.sh@17 -- # name=key0 00:44:32.580 03:06:03 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:32.580 03:06:03 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:32.580 03:06:03 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:32.580 03:06:03 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Ue6B8mbpAY 00:44:32.580 03:06:03 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:32.580 03:06:03 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:32.580 03:06:03 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:44:32.580 03:06:03 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:32.580 03:06:03 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:44:32.580 03:06:03 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:32.580 03:06:03 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:32.580 03:06:03 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Ue6B8mbpAY 00:44:32.580 03:06:03 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Ue6B8mbpAY 00:44:32.580 03:06:03 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.Ue6B8mbpAY 00:44:32.580 03:06:03 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ue6B8mbpAY 00:44:32.580 03:06:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Ue6B8mbpAY 00:44:32.839 03:06:03 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:32.839 03:06:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:33.098 nvme0n1 00:44:33.098 03:06:03 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:44:33.098 03:06:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:33.098 03:06:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:33.098 03:06:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:33.098 03:06:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:33.098 03:06:03 keyring_file -- keyring/common.sh@8 -- # 
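The `format_interchange_psk` step above wraps a raw hex key into the NVMe TLS PSK interchange form (prefix `NVMeTLSkey-1`, digest indicator 0). A hedged Python sketch of that encoding — the payload layout (key bytes followed by a little-endian CRC32, base64-encoded, colon-terminated) is an assumption about what the shell helper's embedded `python -` snippet produces, not taken from the log itself:

```python
import base64
import binascii
import struct

def format_interchange_psk(hex_key, digest=0):
    # Assumed interchange layout:
    #   "NVMeTLSkey-1:<digest as 2 hex digits>:" +
    #   base64(key bytes + little-endian CRC32 of the key) + ":"
    # digest 0 means the key carries no hash transform.
    key = bytes.fromhex(hex_key)
    crc = struct.pack("<I", binascii.crc32(key))
    payload = base64.b64encode(key + crc).decode()
    return f"NVMeTLSkey-1:{digest:02x}:{payload}:"

# Same sample key material as the trace above.
psk = format_interchange_psk("00112233445566778899aabbccddeeff", 0)
print(psk)
```

The result is what gets written to the temp file, chmod'ed to 0600, and registered with `keyring_file_add_key` in the following steps.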
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:33.422 03:06:03 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:44:33.422 03:06:03 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:44:33.422 03:06:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:33.422 03:06:03 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:44:33.422 03:06:03 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:44:33.422 03:06:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:33.422 03:06:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:33.422 03:06:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:33.753 03:06:04 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:44:33.753 03:06:04 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:44:33.753 03:06:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:33.753 03:06:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:33.753 03:06:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:33.753 03:06:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:33.753 03:06:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:33.753 03:06:04 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:44:33.753 03:06:04 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:33.753 03:06:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
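The refcount checks above repeatedly run the pipeline `keyring_get_keys | jq '.[] | select(.name == "key0")' | jq -r .refcnt`. A small Python equivalent, using a hypothetical `keyring_get_keys` payload (field names taken from the jq filters in the trace; the sample path is made up):

```python
import json

# Hypothetical keyring_get_keys response; .name, .refcnt and .removed
# match the fields the jq filters above select on.
keys_json = json.dumps([
    {"name": "key0", "path": "/tmp/example.key", "removed": False, "refcnt": 1},
])

def get_refcnt(raw, name):
    # Equivalent of: jq '.[] | select(.name == NAME)' followed by jq -r .refcnt
    for key in json.loads(raw):
        if key["name"] == name:
            return key["refcnt"]
    return None

print(get_refcnt(keys_json, "key0"))  # 1
```

This is the comparison the test wraps in `(( 1 == 1 ))` style assertions after each attach, detach, and remove step.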
bdev_nvme_detach_controller nvme0 00:44:34.011 03:06:04 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:44:34.012 03:06:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:34.012 03:06:04 keyring_file -- keyring/file.sh@105 -- # jq length 00:44:34.271 03:06:04 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:44:34.271 03:06:04 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ue6B8mbpAY 00:44:34.271 03:06:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Ue6B8mbpAY 00:44:34.271 03:06:04 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.CFvQwMIEw8 00:44:34.271 03:06:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.CFvQwMIEw8 00:44:34.531 03:06:05 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:34.531 03:06:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:34.789 nvme0n1 00:44:34.789 03:06:05 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:44:34.789 03:06:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:44:35.048 03:06:05 keyring_file -- keyring/file.sh@113 -- # config='{ 00:44:35.048 "subsystems": [ 00:44:35.048 { 00:44:35.048 "subsystem": 
"keyring", 00:44:35.048 "config": [ 00:44:35.048 { 00:44:35.048 "method": "keyring_file_add_key", 00:44:35.048 "params": { 00:44:35.048 "name": "key0", 00:44:35.048 "path": "/tmp/tmp.Ue6B8mbpAY" 00:44:35.048 } 00:44:35.048 }, 00:44:35.048 { 00:44:35.048 "method": "keyring_file_add_key", 00:44:35.048 "params": { 00:44:35.048 "name": "key1", 00:44:35.048 "path": "/tmp/tmp.CFvQwMIEw8" 00:44:35.048 } 00:44:35.048 } 00:44:35.048 ] 00:44:35.048 }, 00:44:35.048 { 00:44:35.048 "subsystem": "iobuf", 00:44:35.048 "config": [ 00:44:35.048 { 00:44:35.048 "method": "iobuf_set_options", 00:44:35.048 "params": { 00:44:35.048 "small_pool_count": 8192, 00:44:35.048 "large_pool_count": 1024, 00:44:35.048 "small_bufsize": 8192, 00:44:35.048 "large_bufsize": 135168, 00:44:35.048 "enable_numa": false 00:44:35.048 } 00:44:35.048 } 00:44:35.048 ] 00:44:35.048 }, 00:44:35.048 { 00:44:35.048 "subsystem": "sock", 00:44:35.048 "config": [ 00:44:35.048 { 00:44:35.048 "method": "sock_set_default_impl", 00:44:35.048 "params": { 00:44:35.048 "impl_name": "posix" 00:44:35.048 } 00:44:35.048 }, 00:44:35.048 { 00:44:35.048 "method": "sock_impl_set_options", 00:44:35.048 "params": { 00:44:35.048 "impl_name": "ssl", 00:44:35.048 "recv_buf_size": 4096, 00:44:35.048 "send_buf_size": 4096, 00:44:35.048 "enable_recv_pipe": true, 00:44:35.048 "enable_quickack": false, 00:44:35.048 "enable_placement_id": 0, 00:44:35.048 "enable_zerocopy_send_server": true, 00:44:35.048 "enable_zerocopy_send_client": false, 00:44:35.048 "zerocopy_threshold": 0, 00:44:35.048 "tls_version": 0, 00:44:35.048 "enable_ktls": false 00:44:35.048 } 00:44:35.048 }, 00:44:35.048 { 00:44:35.048 "method": "sock_impl_set_options", 00:44:35.048 "params": { 00:44:35.048 "impl_name": "posix", 00:44:35.048 "recv_buf_size": 2097152, 00:44:35.048 "send_buf_size": 2097152, 00:44:35.048 "enable_recv_pipe": true, 00:44:35.048 "enable_quickack": false, 00:44:35.048 "enable_placement_id": 0, 00:44:35.048 "enable_zerocopy_send_server": true, 
00:44:35.048 "enable_zerocopy_send_client": false, 00:44:35.048 "zerocopy_threshold": 0, 00:44:35.048 "tls_version": 0, 00:44:35.048 "enable_ktls": false 00:44:35.048 } 00:44:35.048 } 00:44:35.048 ] 00:44:35.048 }, 00:44:35.048 { 00:44:35.048 "subsystem": "vmd", 00:44:35.048 "config": [] 00:44:35.048 }, 00:44:35.048 { 00:44:35.048 "subsystem": "accel", 00:44:35.048 "config": [ 00:44:35.048 { 00:44:35.048 "method": "accel_set_options", 00:44:35.048 "params": { 00:44:35.048 "small_cache_size": 128, 00:44:35.048 "large_cache_size": 16, 00:44:35.048 "task_count": 2048, 00:44:35.048 "sequence_count": 2048, 00:44:35.048 "buf_count": 2048 00:44:35.048 } 00:44:35.048 } 00:44:35.048 ] 00:44:35.048 }, 00:44:35.048 { 00:44:35.048 "subsystem": "bdev", 00:44:35.048 "config": [ 00:44:35.048 { 00:44:35.048 "method": "bdev_set_options", 00:44:35.048 "params": { 00:44:35.048 "bdev_io_pool_size": 65535, 00:44:35.048 "bdev_io_cache_size": 256, 00:44:35.048 "bdev_auto_examine": true, 00:44:35.048 "iobuf_small_cache_size": 128, 00:44:35.048 "iobuf_large_cache_size": 16 00:44:35.048 } 00:44:35.048 }, 00:44:35.048 { 00:44:35.048 "method": "bdev_raid_set_options", 00:44:35.048 "params": { 00:44:35.048 "process_window_size_kb": 1024, 00:44:35.048 "process_max_bandwidth_mb_sec": 0 00:44:35.048 } 00:44:35.048 }, 00:44:35.048 { 00:44:35.048 "method": "bdev_iscsi_set_options", 00:44:35.048 "params": { 00:44:35.048 "timeout_sec": 30 00:44:35.048 } 00:44:35.048 }, 00:44:35.048 { 00:44:35.048 "method": "bdev_nvme_set_options", 00:44:35.049 "params": { 00:44:35.049 "action_on_timeout": "none", 00:44:35.049 "timeout_us": 0, 00:44:35.049 "timeout_admin_us": 0, 00:44:35.049 "keep_alive_timeout_ms": 10000, 00:44:35.049 "arbitration_burst": 0, 00:44:35.049 "low_priority_weight": 0, 00:44:35.049 "medium_priority_weight": 0, 00:44:35.049 "high_priority_weight": 0, 00:44:35.049 "nvme_adminq_poll_period_us": 10000, 00:44:35.049 "nvme_ioq_poll_period_us": 0, 00:44:35.049 "io_queue_requests": 512, 
00:44:35.049 "delay_cmd_submit": true, 00:44:35.049 "transport_retry_count": 4, 00:44:35.049 "bdev_retry_count": 3, 00:44:35.049 "transport_ack_timeout": 0, 00:44:35.049 "ctrlr_loss_timeout_sec": 0, 00:44:35.049 "reconnect_delay_sec": 0, 00:44:35.049 "fast_io_fail_timeout_sec": 0, 00:44:35.049 "disable_auto_failback": false, 00:44:35.049 "generate_uuids": false, 00:44:35.049 "transport_tos": 0, 00:44:35.049 "nvme_error_stat": false, 00:44:35.049 "rdma_srq_size": 0, 00:44:35.049 "io_path_stat": false, 00:44:35.049 "allow_accel_sequence": false, 00:44:35.049 "rdma_max_cq_size": 0, 00:44:35.049 "rdma_cm_event_timeout_ms": 0, 00:44:35.049 "dhchap_digests": [ 00:44:35.049 "sha256", 00:44:35.049 "sha384", 00:44:35.049 "sha512" 00:44:35.049 ], 00:44:35.049 "dhchap_dhgroups": [ 00:44:35.049 "null", 00:44:35.049 "ffdhe2048", 00:44:35.049 "ffdhe3072", 00:44:35.049 "ffdhe4096", 00:44:35.049 "ffdhe6144", 00:44:35.049 "ffdhe8192" 00:44:35.049 ], 00:44:35.049 "rdma_umr_per_io": false 00:44:35.049 } 00:44:35.049 }, 00:44:35.049 { 00:44:35.049 "method": "bdev_nvme_attach_controller", 00:44:35.049 "params": { 00:44:35.049 "name": "nvme0", 00:44:35.049 "trtype": "TCP", 00:44:35.049 "adrfam": "IPv4", 00:44:35.049 "traddr": "127.0.0.1", 00:44:35.049 "trsvcid": "4420", 00:44:35.049 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:35.049 "prchk_reftag": false, 00:44:35.049 "prchk_guard": false, 00:44:35.049 "ctrlr_loss_timeout_sec": 0, 00:44:35.049 "reconnect_delay_sec": 0, 00:44:35.049 "fast_io_fail_timeout_sec": 0, 00:44:35.049 "psk": "key0", 00:44:35.049 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:35.049 "hdgst": false, 00:44:35.049 "ddgst": false, 00:44:35.049 "multipath": "multipath" 00:44:35.049 } 00:44:35.049 }, 00:44:35.049 { 00:44:35.049 "method": "bdev_nvme_set_hotplug", 00:44:35.049 "params": { 00:44:35.049 "period_us": 100000, 00:44:35.049 "enable": false 00:44:35.049 } 00:44:35.049 }, 00:44:35.049 { 00:44:35.049 "method": "bdev_wait_for_examine" 00:44:35.049 } 00:44:35.049 ] 
00:44:35.049 }, 00:44:35.049 { 00:44:35.049 "subsystem": "nbd", 00:44:35.049 "config": [] 00:44:35.049 } 00:44:35.049 ] 00:44:35.049 }' 00:44:35.049 03:06:05 keyring_file -- keyring/file.sh@115 -- # killprocess 1327189 00:44:35.049 03:06:05 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1327189 ']' 00:44:35.049 03:06:05 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1327189 00:44:35.049 03:06:05 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:35.049 03:06:05 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:35.049 03:06:05 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1327189 00:44:35.049 03:06:05 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:35.049 03:06:05 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:35.049 03:06:05 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1327189' 00:44:35.049 killing process with pid 1327189 00:44:35.049 03:06:05 keyring_file -- common/autotest_common.sh@973 -- # kill 1327189 00:44:35.049 Received shutdown signal, test time was about 1.000000 seconds 00:44:35.049 00:44:35.049 Latency(us) 00:44:35.049 [2024-12-16T02:06:05.708Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:35.049 [2024-12-16T02:06:05.708Z] =================================================================================================================== 00:44:35.049 [2024-12-16T02:06:05.708Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:35.049 03:06:05 keyring_file -- common/autotest_common.sh@978 -- # wait 1327189 00:44:35.308 03:06:05 keyring_file -- keyring/file.sh@118 -- # bperfpid=1328795 00:44:35.308 03:06:05 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1328795 /var/tmp/bperf.sock 00:44:35.308 03:06:05 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1328795 ']' 00:44:35.308 03:06:05 keyring_file -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:35.308 03:06:05 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:44:35.308 03:06:05 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:35.308 03:06:05 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:35.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:35.308 03:06:05 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:44:35.308 "subsystems": [ 00:44:35.308 { 00:44:35.308 "subsystem": "keyring", 00:44:35.308 "config": [ 00:44:35.308 { 00:44:35.308 "method": "keyring_file_add_key", 00:44:35.308 "params": { 00:44:35.308 "name": "key0", 00:44:35.308 "path": "/tmp/tmp.Ue6B8mbpAY" 00:44:35.308 } 00:44:35.308 }, 00:44:35.308 { 00:44:35.308 "method": "keyring_file_add_key", 00:44:35.308 "params": { 00:44:35.308 "name": "key1", 00:44:35.308 "path": "/tmp/tmp.CFvQwMIEw8" 00:44:35.308 } 00:44:35.308 } 00:44:35.308 ] 00:44:35.308 }, 00:44:35.308 { 00:44:35.308 "subsystem": "iobuf", 00:44:35.308 "config": [ 00:44:35.308 { 00:44:35.309 "method": "iobuf_set_options", 00:44:35.309 "params": { 00:44:35.309 "small_pool_count": 8192, 00:44:35.309 "large_pool_count": 1024, 00:44:35.309 "small_bufsize": 8192, 00:44:35.309 "large_bufsize": 135168, 00:44:35.309 "enable_numa": false 00:44:35.309 } 00:44:35.309 } 00:44:35.309 ] 00:44:35.309 }, 00:44:35.309 { 00:44:35.309 "subsystem": "sock", 00:44:35.309 "config": [ 00:44:35.309 { 00:44:35.309 "method": "sock_set_default_impl", 00:44:35.309 "params": { 00:44:35.309 "impl_name": "posix" 00:44:35.309 } 00:44:35.309 }, 00:44:35.309 { 00:44:35.309 "method": "sock_impl_set_options", 00:44:35.309 "params": { 00:44:35.309 "impl_name": "ssl", 
00:44:35.309 "recv_buf_size": 4096, 00:44:35.309 "send_buf_size": 4096, 00:44:35.309 "enable_recv_pipe": true, 00:44:35.309 "enable_quickack": false, 00:44:35.309 "enable_placement_id": 0, 00:44:35.309 "enable_zerocopy_send_server": true, 00:44:35.309 "enable_zerocopy_send_client": false, 00:44:35.309 "zerocopy_threshold": 0, 00:44:35.309 "tls_version": 0, 00:44:35.309 "enable_ktls": false 00:44:35.309 } 00:44:35.309 }, 00:44:35.309 { 00:44:35.309 "method": "sock_impl_set_options", 00:44:35.309 "params": { 00:44:35.309 "impl_name": "posix", 00:44:35.309 "recv_buf_size": 2097152, 00:44:35.309 "send_buf_size": 2097152, 00:44:35.309 "enable_recv_pipe": true, 00:44:35.309 "enable_quickack": false, 00:44:35.309 "enable_placement_id": 0, 00:44:35.309 "enable_zerocopy_send_server": true, 00:44:35.309 "enable_zerocopy_send_client": false, 00:44:35.309 "zerocopy_threshold": 0, 00:44:35.309 "tls_version": 0, 00:44:35.309 "enable_ktls": false 00:44:35.309 } 00:44:35.309 } 00:44:35.309 ] 00:44:35.309 }, 00:44:35.309 { 00:44:35.309 "subsystem": "vmd", 00:44:35.309 "config": [] 00:44:35.309 }, 00:44:35.309 { 00:44:35.309 "subsystem": "accel", 00:44:35.309 "config": [ 00:44:35.309 { 00:44:35.309 "method": "accel_set_options", 00:44:35.309 "params": { 00:44:35.309 "small_cache_size": 128, 00:44:35.309 "large_cache_size": 16, 00:44:35.309 "task_count": 2048, 00:44:35.309 "sequence_count": 2048, 00:44:35.309 "buf_count": 2048 00:44:35.309 } 00:44:35.309 } 00:44:35.309 ] 00:44:35.309 }, 00:44:35.309 { 00:44:35.309 "subsystem": "bdev", 00:44:35.309 "config": [ 00:44:35.309 { 00:44:35.309 "method": "bdev_set_options", 00:44:35.309 "params": { 00:44:35.309 "bdev_io_pool_size": 65535, 00:44:35.309 "bdev_io_cache_size": 256, 00:44:35.309 "bdev_auto_examine": true, 00:44:35.309 "iobuf_small_cache_size": 128, 00:44:35.309 "iobuf_large_cache_size": 16 00:44:35.309 } 00:44:35.309 }, 00:44:35.309 { 00:44:35.309 "method": "bdev_raid_set_options", 00:44:35.309 "params": { 00:44:35.309 
"process_window_size_kb": 1024, 00:44:35.309 "process_max_bandwidth_mb_sec": 0 00:44:35.309 } 00:44:35.309 }, 00:44:35.309 { 00:44:35.309 "method": "bdev_iscsi_set_options", 00:44:35.309 "params": { 00:44:35.309 "timeout_sec": 30 00:44:35.309 } 00:44:35.309 }, 00:44:35.309 { 00:44:35.309 "method": "bdev_nvme_set_options", 00:44:35.309 "params": { 00:44:35.309 "action_on_timeout": "none", 00:44:35.309 "timeout_us": 0, 00:44:35.309 "timeout_admin_us": 0, 00:44:35.309 "keep_alive_timeout_ms": 10000, 00:44:35.309 "arbitration_burst": 0, 00:44:35.309 "low_priority_weight": 0, 00:44:35.309 "medium_priority_weight": 0, 00:44:35.309 "high_priority_weight": 0, 00:44:35.309 "nvme_adminq_poll_period_us": 10000, 00:44:35.309 "nvme_ioq_poll_period_us": 0, 00:44:35.309 "io_queue_requests": 512, 00:44:35.309 "delay_cmd_submit": true, 00:44:35.309 "transport_retry_count": 4, 00:44:35.309 "bdev_retry_count": 3, 00:44:35.309 "transport_ack_timeout": 0, 00:44:35.309 "ctrlr_loss_timeout_sec": 0, 00:44:35.309 "reconnect_delay_sec": 0, 00:44:35.309 "fast_io_fail_timeout_sec": 0, 00:44:35.309 "disable_auto_failback": false, 00:44:35.309 "generate_uuids": false, 00:44:35.309 "transport_tos": 0, 00:44:35.309 "nvme_error_stat": false, 00:44:35.309 "rdma_srq_size": 0, 00:44:35.309 "io_path_stat": false, 00:44:35.309 "allow_accel_sequence": false, 00:44:35.309 "rdma_max_cq_size": 0, 00:44:35.309 "rdma_cm_event_timeout_ms": 0, 00:44:35.309 "dhchap_digests": [ 00:44:35.309 "sha256", 00:44:35.309 "sha384", 00:44:35.309 "sha512" 00:44:35.309 ], 00:44:35.309 "dhchap_dhgroups": [ 00:44:35.309 "null", 00:44:35.309 "ffdhe2048", 00:44:35.309 "ffdhe3072", 00:44:35.309 "ffdhe4096", 00:44:35.309 "ffdhe6144", 00:44:35.309 "ffdhe8192" 00:44:35.309 ], 00:44:35.309 "rdma_umr_per_io": false 00:44:35.309 } 00:44:35.309 }, 00:44:35.309 { 00:44:35.309 "method": "bdev_nvme_attach_controller", 00:44:35.309 "params": { 00:44:35.309 "name": "nvme0", 00:44:35.309 "trtype": "TCP", 00:44:35.309 "adrfam": "IPv4", 
00:44:35.309 "traddr": "127.0.0.1", 00:44:35.309 "trsvcid": "4420", 00:44:35.309 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:35.309 "prchk_reftag": false, 00:44:35.309 "prchk_guard": false, 00:44:35.309 "ctrlr_loss_timeout_sec": 0, 00:44:35.309 "reconnect_delay_sec": 0, 00:44:35.309 "fast_io_fail_timeout_sec": 0, 00:44:35.309 "psk": "key0", 00:44:35.309 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:35.309 "hdgst": false, 00:44:35.309 "ddgst": false, 00:44:35.309 "multipath": "multipath" 00:44:35.309 } 00:44:35.309 }, 00:44:35.309 { 00:44:35.309 "method": "bdev_nvme_set_hotplug", 00:44:35.309 "params": { 00:44:35.309 "period_us": 100000, 00:44:35.309 "enable": false 00:44:35.309 } 00:44:35.309 }, 00:44:35.309 { 00:44:35.309 "method": "bdev_wait_for_examine" 00:44:35.309 } 00:44:35.309 ] 00:44:35.309 }, 00:44:35.309 { 00:44:35.309 "subsystem": "nbd", 00:44:35.309 "config": [] 00:44:35.309 } 00:44:35.309 ] 00:44:35.309 }' 00:44:35.309 03:06:05 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:35.309 03:06:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:35.309 [2024-12-16 03:06:05.870042] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:44:35.309 [2024-12-16 03:06:05.870096] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1328795 ] 00:44:35.309 [2024-12-16 03:06:05.945020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:35.309 [2024-12-16 03:06:05.964287] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:35.568 [2024-12-16 03:06:06.121796] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:36.135 03:06:06 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:36.135 03:06:06 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:36.136 03:06:06 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:44:36.136 03:06:06 keyring_file -- keyring/file.sh@121 -- # jq length 00:44:36.136 03:06:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:36.395 03:06:06 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:44:36.395 03:06:06 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:44:36.395 03:06:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:36.395 03:06:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:36.395 03:06:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:36.395 03:06:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:36.395 03:06:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:36.654 03:06:07 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:44:36.654 03:06:07 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:44:36.654 03:06:07 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:36.654 03:06:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:36.654 03:06:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:36.654 03:06:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:36.654 03:06:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:36.913 03:06:07 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:44:36.913 03:06:07 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:44:36.913 03:06:07 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:44:36.913 03:06:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:44:36.913 03:06:07 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:44:36.913 03:06:07 keyring_file -- keyring/file.sh@1 -- # cleanup 00:44:36.913 03:06:07 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.Ue6B8mbpAY /tmp/tmp.CFvQwMIEw8 00:44:36.913 03:06:07 keyring_file -- keyring/file.sh@20 -- # killprocess 1328795 00:44:36.913 03:06:07 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1328795 ']' 00:44:36.913 03:06:07 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1328795 00:44:36.913 03:06:07 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:36.913 03:06:07 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:36.913 03:06:07 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1328795 00:44:37.173 03:06:07 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:37.173 03:06:07 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:37.173 03:06:07 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1328795' 00:44:37.173 killing process with pid 1328795 00:44:37.173 03:06:07 keyring_file -- common/autotest_common.sh@973 -- # kill 1328795 00:44:37.173 Received shutdown signal, test time was about 1.000000 seconds 00:44:37.173 00:44:37.173 Latency(us) 00:44:37.173 [2024-12-16T02:06:07.832Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:37.173 [2024-12-16T02:06:07.832Z] =================================================================================================================== 00:44:37.173 [2024-12-16T02:06:07.832Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:44:37.173 03:06:07 keyring_file -- common/autotest_common.sh@978 -- # wait 1328795 00:44:37.173 03:06:07 keyring_file -- keyring/file.sh@21 -- # killprocess 1327184 00:44:37.173 03:06:07 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1327184 ']' 00:44:37.173 03:06:07 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1327184 00:44:37.173 03:06:07 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:37.173 03:06:07 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:37.173 03:06:07 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1327184 00:44:37.173 03:06:07 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:37.173 03:06:07 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:37.173 03:06:07 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1327184' 00:44:37.173 killing process with pid 1327184 00:44:37.173 03:06:07 keyring_file -- common/autotest_common.sh@973 -- # kill 1327184 00:44:37.173 03:06:07 keyring_file -- common/autotest_common.sh@978 -- # wait 1327184 00:44:37.432 00:44:37.432 real 0m11.660s 00:44:37.432 user 0m29.065s 00:44:37.432 sys 0m2.680s 00:44:37.432 03:06:08 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:44:37.432 03:06:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:37.432 ************************************ 00:44:37.432 END TEST keyring_file 00:44:37.432 ************************************ 00:44:37.692 03:06:08 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:44:37.692 03:06:08 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:37.692 03:06:08 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:37.692 03:06:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:37.692 03:06:08 -- common/autotest_common.sh@10 -- # set +x 00:44:37.692 ************************************ 00:44:37.692 START TEST keyring_linux 00:44:37.692 ************************************ 00:44:37.692 03:06:08 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:37.692 Joined session keyring: 694945008 00:44:37.692 * Looking for test storage... 
00:44:37.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:44:37.692 03:06:08 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:37.692 03:06:08 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:44:37.692 03:06:08 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:37.692 03:06:08 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@345 -- # : 1 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@368 -- # return 0 00:44:37.692 03:06:08 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:37.692 03:06:08 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:37.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:37.692 --rc genhtml_branch_coverage=1 00:44:37.692 --rc genhtml_function_coverage=1 00:44:37.692 --rc genhtml_legend=1 00:44:37.692 --rc geninfo_all_blocks=1 00:44:37.692 --rc geninfo_unexecuted_blocks=1 00:44:37.692 00:44:37.692 ' 00:44:37.692 03:06:08 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:37.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:37.692 --rc genhtml_branch_coverage=1 00:44:37.692 --rc genhtml_function_coverage=1 00:44:37.692 --rc genhtml_legend=1 00:44:37.692 --rc geninfo_all_blocks=1 00:44:37.692 --rc geninfo_unexecuted_blocks=1 00:44:37.692 00:44:37.692 ' 
00:44:37.692 03:06:08 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:37.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:37.692 --rc genhtml_branch_coverage=1 00:44:37.692 --rc genhtml_function_coverage=1 00:44:37.692 --rc genhtml_legend=1 00:44:37.692 --rc geninfo_all_blocks=1 00:44:37.692 --rc geninfo_unexecuted_blocks=1 00:44:37.692 00:44:37.692 ' 00:44:37.692 03:06:08 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:37.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:37.692 --rc genhtml_branch_coverage=1 00:44:37.692 --rc genhtml_function_coverage=1 00:44:37.692 --rc genhtml_legend=1 00:44:37.692 --rc geninfo_all_blocks=1 00:44:37.692 --rc geninfo_unexecuted_blocks=1 00:44:37.692 00:44:37.692 ' 00:44:37.692 03:06:08 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:44:37.692 03:06:08 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:37.692 03:06:08 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:44:37.692 03:06:08 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:37.692 03:06:08 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:37.692 03:06:08 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:37.692 03:06:08 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:37.692 03:06:08 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:37.692 03:06:08 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:37.692 03:06:08 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:37.692 03:06:08 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:37.692 03:06:08 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:37.692 03:06:08 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:44:37.692 03:06:08 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:44:37.692 03:06:08 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:44:37.692 03:06:08 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:37.692 03:06:08 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:37.692 03:06:08 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:37.692 03:06:08 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:37.692 03:06:08 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:37.692 03:06:08 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:37.693 03:06:08 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:37.693 03:06:08 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:37.693 03:06:08 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:37.693 03:06:08 keyring_linux -- paths/export.sh@5 -- # export PATH 00:44:37.693 03:06:08 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:37.693 03:06:08 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:44:37.693 03:06:08 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:37.693 03:06:08 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:37.693 03:06:08 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:37.693 03:06:08 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:37.693 03:06:08 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:37.693 03:06:08 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:44:37.693 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:37.693 03:06:08 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:37.693 03:06:08 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:37.693 03:06:08 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:37.693 03:06:08 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:37.693 03:06:08 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:37.693 03:06:08 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:37.693 03:06:08 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:44:37.693 03:06:08 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:44:37.693 03:06:08 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:44:37.693 03:06:08 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:44:37.693 03:06:08 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:37.693 03:06:08 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:44:37.693 03:06:08 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:37.693 03:06:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:37.693 03:06:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:44:37.693 03:06:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:37.693 03:06:08 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:37.693 03:06:08 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:44:37.693 03:06:08 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:37.693 03:06:08 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:44:37.693 03:06:08 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:44:37.693 03:06:08 keyring_linux -- nvmf/common.sh@733 -- # python - 00:44:37.952 03:06:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:44:37.952 03:06:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:44:37.952 /tmp/:spdk-test:key0 00:44:37.952 03:06:08 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:44:37.952 03:06:08 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:37.952 03:06:08 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:44:37.952 03:06:08 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:37.952 03:06:08 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:37.952 03:06:08 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:44:37.952 03:06:08 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:37.952 03:06:08 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:37.952 03:06:08 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:44:37.952 03:06:08 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:37.952 03:06:08 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:44:37.952 03:06:08 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:44:37.952 03:06:08 keyring_linux -- nvmf/common.sh@733 -- # python - 00:44:37.952 03:06:08 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:44:37.952 03:06:08 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:44:37.952 /tmp/:spdk-test:key1 00:44:37.952 03:06:08 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:44:37.952 
03:06:08 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1329725 00:44:37.952 03:06:08 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1329725 00:44:37.952 03:06:08 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1329725 ']' 00:44:37.952 03:06:08 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:37.952 03:06:08 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:37.952 03:06:08 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:37.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:37.952 03:06:08 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:37.952 03:06:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:37.952 [2024-12-16 03:06:08.478270] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:44:37.952 [2024-12-16 03:06:08.478321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1329725 ] 00:44:37.952 [2024-12-16 03:06:08.552534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:37.952 [2024-12-16 03:06:08.574358] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:44:38.211 03:06:08 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:38.211 03:06:08 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:44:38.211 03:06:08 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:44:38.211 03:06:08 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:38.211 03:06:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:38.211 [2024-12-16 03:06:08.789104] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:38.211 null0 00:44:38.211 [2024-12-16 03:06:08.821161] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:38.211 [2024-12-16 03:06:08.821436] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:38.211 03:06:08 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:38.211 03:06:08 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:44:38.211 431390182 00:44:38.211 03:06:08 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:44:38.211 411754790 00:44:38.211 03:06:08 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1329740 00:44:38.211 03:06:08 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1329740 /var/tmp/bperf.sock 00:44:38.211 03:06:08 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:44:38.211 03:06:08 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1329740 ']' 00:44:38.211 03:06:08 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:38.211 03:06:08 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:38.211 03:06:08 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:38.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:38.211 03:06:08 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:38.211 03:06:08 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:38.470 [2024-12-16 03:06:08.892620] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:44:38.470 [2024-12-16 03:06:08.892662] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1329740 ] 00:44:38.470 [2024-12-16 03:06:08.967421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:38.470 [2024-12-16 03:06:08.989615] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:38.470 03:06:09 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:38.470 03:06:09 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:44:38.470 03:06:09 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:44:38.470 03:06:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:44:38.729 03:06:09 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:44:38.729 03:06:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:44:38.987 03:06:09 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:38.987 03:06:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:39.246 [2024-12-16 03:06:09.656771] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:39.246 nvme0n1 00:44:39.246 03:06:09 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:44:39.246 03:06:09 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:44:39.246 03:06:09 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:39.246 03:06:09 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:39.246 03:06:09 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:39.246 03:06:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:39.505 03:06:09 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:44:39.505 03:06:09 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:39.505 03:06:09 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:44:39.505 03:06:09 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:44:39.505 03:06:09 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:39.505 03:06:09 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:44:39.505 03:06:09 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:39.505 03:06:10 keyring_linux -- keyring/linux.sh@25 -- # sn=431390182 00:44:39.505 03:06:10 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:44:39.505 03:06:10 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:39.505 03:06:10 keyring_linux -- keyring/linux.sh@26 -- # [[ 431390182 == \4\3\1\3\9\0\1\8\2 ]] 00:44:39.505 03:06:10 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 431390182 00:44:39.505 03:06:10 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:44:39.505 03:06:10 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:39.763 Running I/O for 1 seconds... 00:44:40.699 21560.00 IOPS, 84.22 MiB/s 00:44:40.699 Latency(us) 00:44:40.699 [2024-12-16T02:06:11.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:40.699 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:44:40.699 nvme0n1 : 1.01 21562.19 84.23 0.00 0.00 5916.89 3698.10 8800.55 00:44:40.699 [2024-12-16T02:06:11.358Z] =================================================================================================================== 00:44:40.699 [2024-12-16T02:06:11.358Z] Total : 21562.19 84.23 0.00 0.00 5916.89 3698.10 8800.55 00:44:40.699 { 00:44:40.699 "results": [ 00:44:40.699 { 00:44:40.699 "job": "nvme0n1", 00:44:40.699 "core_mask": "0x2", 00:44:40.699 "workload": "randread", 00:44:40.699 "status": "finished", 00:44:40.699 "queue_depth": 128, 00:44:40.699 "io_size": 4096, 00:44:40.699 "runtime": 1.005881, 00:44:40.699 "iops": 21562.192744469772, 00:44:40.699 "mibps": 84.22731540808505, 00:44:40.699 "io_failed": 0, 00:44:40.699 "io_timeout": 0, 00:44:40.699 "avg_latency_us": 5916.8877358503, 00:44:40.699 "min_latency_us": 3698.102857142857, 00:44:40.699 "max_latency_us": 8800.548571428571 00:44:40.699 } 00:44:40.699 ], 00:44:40.699 "core_count": 1 00:44:40.699 } 00:44:40.699 03:06:11 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:40.699 03:06:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:40.958 03:06:11 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:44:40.958 03:06:11 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:44:40.958 03:06:11 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:40.958 03:06:11 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:40.958 03:06:11 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:40.958 03:06:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:41.216 03:06:11 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:44:41.216 03:06:11 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:41.216 03:06:11 keyring_linux -- keyring/linux.sh@23 -- # return 00:44:41.216 03:06:11 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:41.216 03:06:11 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:44:41.216 03:06:11 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:41.216 03:06:11 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:41.216 03:06:11 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:41.216 03:06:11 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:41.216 03:06:11 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:41.216 03:06:11 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:41.216 03:06:11 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:41.216 [2024-12-16 03:06:11.842149] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:41.216 [2024-12-16 03:06:11.842858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a3d0 (107): Transport endpoint is not connected 00:44:41.216 [2024-12-16 03:06:11.843852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74a3d0 (9): Bad file descriptor 00:44:41.217 [2024-12-16 03:06:11.844853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:41.217 [2024-12-16 03:06:11.844864] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:41.217 [2024-12-16 03:06:11.844871] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:41.217 [2024-12-16 03:06:11.844879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:44:41.217 request:
00:44:41.217 {
00:44:41.217 "name": "nvme0",
00:44:41.217 "trtype": "tcp",
00:44:41.217 "traddr": "127.0.0.1",
00:44:41.217 "adrfam": "ipv4",
00:44:41.217 "trsvcid": "4420",
00:44:41.217 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:44:41.217 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:44:41.217 "prchk_reftag": false,
00:44:41.217 "prchk_guard": false,
00:44:41.217 "hdgst": false,
00:44:41.217 "ddgst": false,
00:44:41.217 "psk": ":spdk-test:key1",
00:44:41.217 "allow_unrecognized_csi": false,
00:44:41.217 "method": "bdev_nvme_attach_controller",
00:44:41.217 "req_id": 1
00:44:41.217 }
00:44:41.217 Got JSON-RPC error response
00:44:41.217 response:
00:44:41.217 {
00:44:41.217 "code": -5,
00:44:41.217 "message": "Input/output error"
00:44:41.217 }
00:44:41.217 03:06:11 keyring_linux -- common/autotest_common.sh@655 -- # es=1
00:44:41.217 03:06:11 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:44:41.217 03:06:11 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:44:41.217 03:06:11 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:44:41.217 03:06:11 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:44:41.217 03:06:11 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:44:41.217 03:06:11 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:44:41.217 03:06:11 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:44:41.217 03:06:11 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:44:41.217 03:06:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:44:41.217 03:06:11 keyring_linux -- keyring/linux.sh@33 -- # sn=431390182
00:44:41.217 03:06:11 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 431390182
00:44:41.217 1 links removed
00:44:41.217 03:06:11 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:44:41.217 03:06:11 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:44:41.217 03:06:11 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:44:41.217 03:06:11 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:44:41.217 03:06:11 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:44:41.217 03:06:11 keyring_linux -- keyring/linux.sh@33 -- # sn=411754790
00:44:41.217 03:06:11 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 411754790
00:44:41.217 1 links removed
00:44:41.476 03:06:11 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1329740
00:44:41.476 03:06:11 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1329740 ']'
00:44:41.476 03:06:11 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1329740
00:44:41.476 03:06:11 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:44:41.476 03:06:11 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:44:41.476 03:06:11 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1329740
00:44:41.476 03:06:11 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:44:41.476 03:06:11 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:44:41.476 03:06:11 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1329740'
00:44:41.476 killing process with pid 1329740
00:44:41.476 03:06:11 keyring_linux -- common/autotest_common.sh@973 -- # kill 1329740
00:44:41.476 Received shutdown signal, test time was about 1.000000 seconds
00:44:41.476
00:44:41.476 Latency(us)
00:44:41.476 [2024-12-16T02:06:12.135Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:44:41.476 [2024-12-16T02:06:12.135Z] ===================================================================================================================
00:44:41.476 [2024-12-16T02:06:12.135Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:44:41.476 03:06:11 keyring_linux -- common/autotest_common.sh@978 -- # wait 1329740
00:44:41.476 03:06:12 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1329725
00:44:41.476 03:06:12 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1329725 ']'
00:44:41.476 03:06:12 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1329725
00:44:41.476 03:06:12 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:44:41.476 03:06:12 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:44:41.476 03:06:12 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1329725
00:44:41.476 03:06:12 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:44:41.476 03:06:12 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:44:41.476 03:06:12 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1329725'
00:44:41.476 killing process with pid 1329725
00:44:41.476 03:06:12 keyring_linux -- common/autotest_common.sh@973 -- # kill 1329725
00:44:41.476 03:06:12 keyring_linux -- common/autotest_common.sh@978 -- # wait 1329725
00:44:42.044
00:44:42.044 real 0m4.284s
00:44:42.044 user 0m8.123s
00:44:42.044 sys 0m1.429s
00:44:42.044 03:06:12 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable
00:44:42.044 03:06:12 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:44:42.044 ************************************
00:44:42.044 END TEST keyring_linux
00:44:42.044 ************************************
00:44:42.044 03:06:12 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:44:42.044 03:06:12 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:44:42.044 03:06:12 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:44:42.044 03:06:12 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:44:42.044 03:06:12 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:44:42.044 03:06:12 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:44:42.044 03:06:12 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:44:42.044 03:06:12 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:44:42.044 03:06:12 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:44:42.044 03:06:12 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:44:42.044 03:06:12 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:44:42.044 03:06:12 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:44:42.044 03:06:12 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:44:42.044 03:06:12 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:44:42.044 03:06:12 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:44:42.044 03:06:12 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:44:42.044 03:06:12 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:44:42.044 03:06:12 -- common/autotest_common.sh@726 -- # xtrace_disable
00:44:42.044 03:06:12 -- common/autotest_common.sh@10 -- # set +x
00:44:42.044 03:06:12 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:44:42.044 03:06:12 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:44:42.044 03:06:12 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:44:42.044 03:06:12 -- common/autotest_common.sh@10 -- # set +x
00:44:47.328 INFO: APP EXITING
00:44:47.328 INFO: killing all VMs
00:44:47.328 INFO: killing vhost app
00:44:47.328 INFO: EXIT DONE
00:44:50.618 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:44:50.618 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:44:50.618 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:44:50.618 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:44:50.618 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:44:50.618 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:44:50.618 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:44:50.618 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:44:50.618 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:44:50.618 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:44:50.618 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:44:50.618 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:44:50.618 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:44:50.618 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:44:50.618 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:44:50.618 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:44:50.618 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:44:53.905 Cleaning
00:44:53.905 Removing: /var/run/dpdk/spdk0/config
00:44:53.905 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:44:53.905 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:44:53.905 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:44:53.905 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:44:53.905 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:44:53.905 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:44:53.905 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:44:53.905 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:44:53.905 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:44:53.905 Removing: /var/run/dpdk/spdk0/hugepage_info
00:44:53.905 Removing: /var/run/dpdk/spdk1/config
00:44:53.905 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:44:53.905 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:44:53.905 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:44:53.905 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:44:53.905 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:44:53.905 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:44:53.905 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:44:53.905 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:44:53.905 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:44:53.905 Removing: /var/run/dpdk/spdk1/hugepage_info
00:44:53.905 Removing: /var/run/dpdk/spdk2/config
00:44:53.905 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:44:53.905 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:44:53.906 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:44:53.906 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:44:53.906 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:44:53.906 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:44:53.906 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:44:53.906 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:44:53.906 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:44:53.906 Removing: /var/run/dpdk/spdk2/hugepage_info
00:44:53.906 Removing: /var/run/dpdk/spdk3/config
00:44:53.906 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:44:53.906 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:44:53.906 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:44:53.906 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:44:53.906 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:44:53.906 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:44:53.906 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:44:53.906 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:44:53.906 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:44:53.906 Removing: /var/run/dpdk/spdk3/hugepage_info
00:44:53.906 Removing: /var/run/dpdk/spdk4/config
00:44:53.906 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:44:53.906 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:44:53.906 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:44:53.906 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:44:53.906 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:44:53.906 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:44:53.906 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:44:53.906 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:44:53.906 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:44:53.906 Removing: /var/run/dpdk/spdk4/hugepage_info
00:44:53.906 Removing: /dev/shm/bdev_svc_trace.1
00:44:53.906 Removing: /dev/shm/nvmf_trace.0
00:44:53.906 Removing: /dev/shm/spdk_tgt_trace.pid772395
00:44:53.906 Removing: /var/run/dpdk/spdk0
00:44:53.906 Removing: /var/run/dpdk/spdk1
00:44:53.906 Removing: /var/run/dpdk/spdk2
00:44:53.906 Removing: /var/run/dpdk/spdk3
00:44:53.906 Removing: /var/run/dpdk/spdk4
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1011369
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1015787
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1017349
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1019130
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1019160
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1019375
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1019578
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1020011
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1021676
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1022623
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1023041
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1025671
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1026112
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1026642
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1030726
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1036107
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1036108
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1036109
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1039825
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1043515
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1048394
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1083781
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1087666
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1093751
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1094946
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1096319
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1097617
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1102224
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1106614
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1110954
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1118208
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1118210
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1122709
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1122915
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1123132
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1123514
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1123591
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1124954
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1126655
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1128210
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1129801
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1131546
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1133104
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1138878
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1139438
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1141234
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1142210
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1147967
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1150958
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1156191
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1161412
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1170001
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1176856
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1176859
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1195397
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1196269
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1196919
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1197381
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1198098
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1198561
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1199176
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1199693
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1203757
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1204066
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1209985
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1210106
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1215416
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1219407
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1229116
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1229576
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1233745
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1233982
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1238056
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1244181
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1246688
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1256428
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1264944
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1266713
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1267570
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1283218
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1286961
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1290097
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1297662
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1297754
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1302822
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1304642
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1306469
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1307670
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1309594
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1310638
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1319204
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1319774
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1320311
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1322522
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1322974
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1323430
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1327184
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1327189
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1328795
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1329725
00:44:53.906 Removing: /var/run/dpdk/spdk_pid1329740
00:44:53.906 Removing: /var/run/dpdk/spdk_pid770305
00:44:53.906 Removing: /var/run/dpdk/spdk_pid771332
00:44:53.906 Removing: /var/run/dpdk/spdk_pid772395
00:44:53.906 Removing: /var/run/dpdk/spdk_pid773016
00:44:53.906 Removing: /var/run/dpdk/spdk_pid773940
00:44:53.906 Removing: /var/run/dpdk/spdk_pid774040
00:44:53.906 Removing: /var/run/dpdk/spdk_pid775085
00:44:53.906 Removing: /var/run/dpdk/spdk_pid775130
00:44:53.906 Removing: /var/run/dpdk/spdk_pid775478
00:44:53.906 Removing: /var/run/dpdk/spdk_pid776958
00:44:53.906 Removing: /var/run/dpdk/spdk_pid778393
00:44:53.906 Removing: /var/run/dpdk/spdk_pid778822
00:44:53.906 Removing: /var/run/dpdk/spdk_pid779013
00:44:53.906 Removing: /var/run/dpdk/spdk_pid779209
00:44:53.906 Removing: /var/run/dpdk/spdk_pid779765
00:44:53.906 Removing: /var/run/dpdk/spdk_pid780125
00:44:53.906 Removing: /var/run/dpdk/spdk_pid780370
00:44:53.906 Removing: /var/run/dpdk/spdk_pid780645
00:44:54.165 Removing: /var/run/dpdk/spdk_pid781383
00:44:54.165 Removing: /var/run/dpdk/spdk_pid784309
00:44:54.165 Removing: /var/run/dpdk/spdk_pid784565
00:44:54.165 Removing: /var/run/dpdk/spdk_pid784815
00:44:54.165 Removing: /var/run/dpdk/spdk_pid784821
00:44:54.165 Removing: /var/run/dpdk/spdk_pid785303
00:44:54.165 Removing: /var/run/dpdk/spdk_pid785371
00:44:54.165 Removing: /var/run/dpdk/spdk_pid785788
00:44:54.165 Removing: /var/run/dpdk/spdk_pid785798
00:44:54.165 Removing: /var/run/dpdk/spdk_pid786071
00:44:54.165 Removing: /var/run/dpdk/spdk_pid786231
00:44:54.165 Removing: /var/run/dpdk/spdk_pid786337
00:44:54.165 Removing: /var/run/dpdk/spdk_pid786545
00:44:54.165 Removing: /var/run/dpdk/spdk_pid786927
00:44:54.165 Removing: /var/run/dpdk/spdk_pid787135
00:44:54.165 Removing: /var/run/dpdk/spdk_pid787429
00:44:54.165 Removing: /var/run/dpdk/spdk_pid791267
00:44:54.166 Removing: /var/run/dpdk/spdk_pid795463
00:44:54.166 Removing: /var/run/dpdk/spdk_pid805541
00:44:54.166 Removing: /var/run/dpdk/spdk_pid806082
00:44:54.166 Removing: /var/run/dpdk/spdk_pid810369
00:44:54.166 Removing: /var/run/dpdk/spdk_pid810654
00:44:54.166 Removing: /var/run/dpdk/spdk_pid814844
00:44:54.166 Removing: /var/run/dpdk/spdk_pid820607
00:44:54.166 Removing: /var/run/dpdk/spdk_pid823483
00:44:54.166 Removing: /var/run/dpdk/spdk_pid833890
00:44:54.166 Removing: /var/run/dpdk/spdk_pid842854
00:44:54.166 Removing: /var/run/dpdk/spdk_pid844642
00:44:54.166 Removing: /var/run/dpdk/spdk_pid845540
00:44:54.166 Removing: /var/run/dpdk/spdk_pid862298
00:44:54.166 Removing: /var/run/dpdk/spdk_pid866303
00:44:54.166 Removing: /var/run/dpdk/spdk_pid947949
00:44:54.166 Removing: /var/run/dpdk/spdk_pid953750
00:44:54.166 Removing: /var/run/dpdk/spdk_pid959397
00:44:54.166 Removing: /var/run/dpdk/spdk_pid965742
00:44:54.166 Removing: /var/run/dpdk/spdk_pid965750
00:44:54.166 Removing: /var/run/dpdk/spdk_pid966637
00:44:54.166 Removing: /var/run/dpdk/spdk_pid967529
00:44:54.166 Removing: /var/run/dpdk/spdk_pid968337
00:44:54.166 Removing: /var/run/dpdk/spdk_pid968872
00:44:54.166 Removing: /var/run/dpdk/spdk_pid968881
00:44:54.166 Removing: /var/run/dpdk/spdk_pid969107
00:44:54.166 Removing: /var/run/dpdk/spdk_pid969329
00:44:54.166 Removing: /var/run/dpdk/spdk_pid969333
00:44:54.166 Removing: /var/run/dpdk/spdk_pid970220
00:44:54.166 Removing: /var/run/dpdk/spdk_pid971031
00:44:54.166 Removing: /var/run/dpdk/spdk_pid971809
00:44:54.166 Removing: /var/run/dpdk/spdk_pid972467
00:44:54.166 Removing: /var/run/dpdk/spdk_pid972642
00:44:54.166 Removing: /var/run/dpdk/spdk_pid972898
00:44:54.166 Removing: /var/run/dpdk/spdk_pid973897
00:44:54.166 Removing: /var/run/dpdk/spdk_pid974849
00:44:54.166 Removing: /var/run/dpdk/spdk_pid982960
00:44:54.166 Clean
00:44:54.425 03:06:24 -- common/autotest_common.sh@1453 -- # return 0
00:44:54.425 03:06:24 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:44:54.425 03:06:24 -- common/autotest_common.sh@732 -- # xtrace_disable
00:44:54.425 03:06:24 -- common/autotest_common.sh@10 -- # set +x
00:44:54.425 03:06:24 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:44:54.425 03:06:24 -- common/autotest_common.sh@732 -- # xtrace_disable
00:44:54.425 03:06:24 -- common/autotest_common.sh@10 -- # set +x
00:44:54.425 03:06:24 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:44:54.425 03:06:24 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:44:54.425 03:06:24 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:44:54.425 03:06:24 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:44:54.425 03:06:24 -- spdk/autotest.sh@398 -- # hostname
00:44:54.425 03:06:24 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-04 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:44:54.695 geninfo: WARNING: invalid characters removed from testname!
00:45:16.649 03:06:45 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:45:18.025 03:06:48 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:45:19.929 03:06:50 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:45:21.833 03:06:52 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:45:23.737 03:06:54 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:45:25.641 03:06:56 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:45:27.547 03:06:57 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:45:27.547 03:06:57 -- spdk/autorun.sh@1 -- $ timing_finish
00:45:27.547 03:06:57 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:45:27.547 03:06:57 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:45:27.547 03:06:57 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:45:27.547 03:06:57 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:45:27.547 + [[ -n 675614 ]]
00:45:27.547 + sudo kill 675614
00:45:27.556 [Pipeline] }
00:45:27.571 [Pipeline] // stage
00:45:27.577 [Pipeline] }
00:45:27.591 [Pipeline] // timeout
00:45:27.596 [Pipeline] }
00:45:27.610 [Pipeline] // catchError
00:45:27.615 [Pipeline] }
00:45:27.630 [Pipeline] // wrap
00:45:27.634 [Pipeline] }
00:45:27.646 [Pipeline] // catchError
00:45:27.654 [Pipeline] stage
00:45:27.656 [Pipeline] { (Epilogue)
00:45:27.667 [Pipeline] catchError
00:45:27.669 [Pipeline] {
00:45:27.681 [Pipeline] echo
00:45:27.682 Cleanup processes
00:45:27.688 [Pipeline] sh
00:45:27.975 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:45:27.975 1341433 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:45:27.988 [Pipeline] sh
00:45:28.274 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:45:28.274 ++ grep -v 'sudo pgrep'
00:45:28.274 ++ awk '{print $1}'
00:45:28.274 + sudo kill -9
00:45:28.274 + true
00:45:28.285 [Pipeline] sh
00:45:28.570 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:45:40.789 [Pipeline] sh
00:45:41.074 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:45:41.074 Artifacts sizes are good
00:45:41.088 [Pipeline] archiveArtifacts
00:45:41.095 Archiving artifacts
00:45:41.243 [Pipeline] sh
00:45:41.529 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:45:41.542 [Pipeline] cleanWs
00:45:41.552 [WS-CLEANUP] Deleting project workspace...
00:45:41.552 [WS-CLEANUP] Deferred wipeout is used...
00:45:41.558 [WS-CLEANUP] done
00:45:41.560 [Pipeline] }
00:45:41.576 [Pipeline] // catchError
00:45:41.588 [Pipeline] sh
00:45:41.951 + logger -p user.info -t JENKINS-CI
00:45:42.016 [Pipeline] }
00:45:42.029 [Pipeline] // stage
00:45:42.034 [Pipeline] }
00:45:42.048 [Pipeline] // node
00:45:42.053 [Pipeline] End of Pipeline
00:45:42.110 Finished: SUCCESS